| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-monitoring | | metrics-server-66666c5bf-5b985 | Scheduled | Successfully assigned openshift-monitoring/metrics-server-66666c5bf-5b985 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-f9pgk | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-dns | | dns-default-t22wm | Scheduled | Successfully assigned openshift-dns/dns-default-t22wm to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-bcmz4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-bcmz4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-bcmz4 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-dns | | node-resolver-l8bk2 | Scheduled | Successfully assigned openshift-dns/node-resolver-l8bk2 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-dns | | node-resolver-qs9t5 | Scheduled | Successfully assigned openshift-dns/node-resolver-qs9t5 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-dns | | dns-default-g5zzn | Scheduled | Successfully assigned openshift-dns/dns-default-g5zzn to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-cluster-csi-drivers | | azure-file-csi-driver-node-qgwhz | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-node-qgwhz to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-dns | | dns-default-6qw2v | Scheduled | Successfully assigned openshift-dns/dns-default-6qw2v to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-bcmz4 | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-bcmz4 | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-bcmz4 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-bcmz4 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-cl69q | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-cl69q | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-cl69q | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-cl69q | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-multus | | multus-additional-cni-plugins-w65vj | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-w65vj to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-cl69q | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-cl69q | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-cl69q | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-jj8xw | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-jj8xw | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6777f8cb5c-jj8xw | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6777f8cb5c-jj8xw to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-multus | | network-metrics-daemon-xcz98 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-xcz98 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-d7kss | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-d7kss | TerminationStoppedServing | Server has stopped listening |
| | openshift-multus | | network-metrics-daemon-p98p7 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-p98p7 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-d7kss | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-d7kss | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-jj8xw | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-jj8xw | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-cluster-csi-drivers | | azure-disk-csi-driver-node-6wk8q | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-node-6wk8q to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-monitoring | | kube-state-metrics-598b4cb887-xkxn7 | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-598b4cb887-xkxn7 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-jj8xw | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-jj8xw | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6777f8cb5c-jj8xw | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-monitoring | | metrics-server-66666c5bf-2k6dh | Scheduled | Successfully assigned openshift-monitoring/metrics-server-66666c5bf-2k6dh to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-f9pgk | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-monitoring | | node-exporter-4xjkt | Scheduled | Successfully assigned openshift-monitoring/node-exporter-4xjkt to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-f9pgk | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-f9pgk | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-f9pgk | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-qdcqz | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-controller-manager | | controller-manager-d8cbffd66-vbf7r | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-d8cbffd66-vbf7r to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-controller-manager | | controller-manager-d8cbffd66-vbf7r | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-d8cbffd66-vbf7r | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-xtcks | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7cfc668fc8-xtcks to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-qdcqz | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-xtcks | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-mplwz | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7cfc668fc8-mplwz to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-qdcqz | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-qdcqz | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-d7kss | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-mplwz | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-d2fkd | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7cfc668fc8-d2fkd to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-qdcqz | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-d2fkd | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-vbpk9 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-vbpk9 | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-vbpk9 | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-vbpk9 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-cluster-node-tuning-operator | | tuned-k2wml | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-k2wml to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-oauth-apiserver | apiserver | apiserver-7879b848d6-vbpk9 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-route-controller-manager | | route-controller-manager-6d7d8b6854-qjgq9 | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "route-controller-manager-6d7d8b6854-qjgq9": pod route-controller-manager-6d7d8b6854-qjgq9 is already assigned to node "ci-op-9xx71rvq-1e28e-w667k-master-0" |
| | openshift-apiserver | | apiserver-78d6c6c648-tcdpn | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-6d7d8b6854-qjgq9 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-6d7d8b6854-dlnkl | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6d7d8b6854-dlnkl to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-6d7d8b6854-dlnkl | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-6d7d8b6854-9jxht | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6d7d8b6854-9jxht to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-route-controller-manager | | route-controller-manager-6d7d8b6854-9jxht | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-czt9k | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-czt9k | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-ingress | | router-default-7c66d9f4d8-hjjcl | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-apiserver | | apiserver-78d6c6c648-tcdpn | Scheduled | Successfully assigned openshift-apiserver/apiserver-78d6c6c648-tcdpn to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-network-diagnostics | | network-check-source-775df55c85-86pxw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | | network-check-source-775df55c85-86pxw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ovn-kubernetes | | ovnkube-node-fh4k2 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-fh4k2 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-tcdpn | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-tcdpn | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-tcdpn | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-tcdpn | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-cloud-controller-manager | | azure-cloud-node-manager-p48ld | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-p48ld to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-apiserver | | apiserver-78d6c6c648-d7kss | Scheduled | Successfully assigned openshift-apiserver/apiserver-78d6c6c648-d7kss to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-apiserver | | apiserver-78d6c6c648-d7kss | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-csi-drivers | | azure-file-csi-driver-node-mft7l | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-node-mft7l to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-multus | | multus-additional-cni-plugins-9pnbf | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-9pnbf to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-marketplace | | community-operators-gv7zt | Scheduled | Successfully assigned openshift-marketplace/community-operators-gv7zt to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | | community-operators-gv6mm | Scheduled | Successfully assigned openshift-marketplace/community-operators-gv6mm to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | | community-operators-2czqg | Scheduled | Successfully assigned openshift-marketplace/community-operators-2czqg to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | | certified-operators-xlp8k | Scheduled | Successfully assigned openshift-marketplace/certified-operators-xlp8k to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-tzr6j | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-tzr6j | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-tzr6j | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-tzr6j | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-marketplace | | certified-operators-q5sfs | Scheduled | Successfully assigned openshift-marketplace/certified-operators-q5sfs to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-cloud-controller-manager | | azure-cloud-node-manager-t6wgr | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-t6wgr to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-tzr6j | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-marketplace | | certified-operators-ff97n | Scheduled | Successfully assigned openshift-marketplace/certified-operators-ff97n to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-apiserver | apiserver | apiserver-78d6c6c648-tcdpn | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-multus | | multus-additional-cni-plugins-cnwtn | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-cnwtn to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-apiserver | | apiserver-78d6c6c648-zwlsw | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-csi-drivers | | azure-file-csi-driver-node-gz7kd | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-node-gz7kd to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-multus | | network-metrics-daemon-8xrbm | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-8xrbm to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-machine-config-operator | | machine-config-daemon-xjnf6 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-xjnf6 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-machine-config-operator | | machine-config-daemon-p4qhk | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-p4qhk to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-apiserver | | apiserver-78d6c6c648-zwlsw | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-78d6c6c648-zwlsw | Scheduled | Successfully assigned openshift-apiserver/apiserver-78d6c6c648-zwlsw to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-network-diagnostics | | network-check-source-775df55c85-86pxw | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-775df55c85-86pxw to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-marketplace | | redhat-marketplace-mbjsk | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-mbjsk to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | | redhat-marketplace-qvmqw | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-qvmqw to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | | redhat-marketplace-zq589 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-zq589 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-p5qtd | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-p5qtd | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-p5qtd | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-p5qtd | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 15s finished |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-czt9k | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-f74744fc5-czt9k to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-d2ds7 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-d2ds7 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-f74744fc5-d2ds7 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-xrzm4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-xrzm4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-f74744fc5-xrzm4 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-f74744fc5-xrzm4 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-controller-manager | | controller-manager-7cfc668fc8-mplwz | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-dns | | node-resolver-7wq8n | Scheduled | Successfully assigned openshift-dns/node-resolver-7wq8n to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-cluster-node-tuning-operator | | tuned-p487g | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-p487g to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-cluster-node-tuning-operator | | tuned-lxhxn | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-lxhxn to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-operator-lifecycle-manager | | collect-profiles-28635045-pspjp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | | collect-profiles-28635045-pspjp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | | collect-profiles-28635045-pspjp | FailedScheduling | skip schedule deleting pod: openshift-operator-lifecycle-manager/collect-profiles-28635045-pspjp |
| | openshift-monitoring | | node-exporter-h2gfv | Scheduled | Successfully assigned openshift-monitoring/node-exporter-h2gfv to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-ingress | | router-default-7c66d9f4d8-hjjcl | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-7c66d9f4d8-hjjcl | Scheduled | Successfully assigned openshift-ingress/router-default-7c66d9f4d8-hjjcl to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-ingress | | router-default-7c66d9f4d8-wk77v | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-7c66d9f4d8-wk77v | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-7c66d9f4d8-wk77v | Scheduled | Successfully assigned openshift-ingress/router-default-7c66d9f4d8-wk77v to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-ingress-canary | | ingress-canary-4skx2 | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-4skx2 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-ingress-canary | | ingress-canary-rcqsw | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-rcqsw to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-ingress-canary | | ingress-canary-xv252 | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-xv252 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-route-controller-manager | | route-controller-manager-67956fc655-w4vck | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-67956fc655-w4vck to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-route-controller-manager | | route-controller-manager-67956fc655-w4vck | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-operator-lifecycle-manager | | collect-profiles-28635060-5nb2j | FailedScheduling | 0/6 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | | collect-profiles-28635060-5nb2j | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28635060-5nb2j to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-monitoring | | node-exporter-j8fxj | Scheduled | Successfully assigned openshift-monitoring/node-exporter-j8fxj to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-network-operator | | iptables-alerter-vfv6g | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-vfv6g to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-network-operator | | iptables-alerter-sn28d | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-sn28d to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-marketplace | | redhat-operators-k7n2j | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-k7n2j to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | | redhat-operators-nnx4s | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-nnx4s to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-apiserver | apiserver | apiserver-7847c9d86c-p5qtd | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-cluster-csi-drivers | | azure-disk-csi-driver-node-mv6v5 | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-node-mv6v5 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-marketplace | | redhat-operators-zkfnp | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-zkfnp to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-monitoring | | node-exporter-lxgj9 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-lxgj9 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-monitoring | | node-exporter-ppw7h | Scheduled | Successfully assigned openshift-monitoring/node-exporter-ppw7h to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-monitoring | | node-exporter-w5svb | Scheduled | Successfully assigned openshift-monitoring/node-exporter-w5svb to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-cloud-controller-manager | | azure-cloud-node-manager-b7mbg | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-b7mbg to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-network-operator | | iptables-alerter-hpmwj | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-hpmwj to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-cluster-csi-drivers | | azure-disk-csi-driver-node-qmbvr | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-node-qmbvr to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | | openshift-state-metrics-86886ccdb8-6v5s2 | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-86886ccdb8-6v5s2 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | | prometheus-operator-9cd6bf8d5-d8nbk | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-9cd6bf8d5-d8nbk to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-monitoring | | prometheus-operator-admission-webhook-566b55489f-2ktqr | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-566b55489f-2ktqr | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-566b55489f-2ktqr | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-566b55489f-2ktqr to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-monitoring | | prometheus-operator-admission-webhook-566b55489f-wzvmv | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ovn-kubernetes | | ovnkube-node-4hhxq | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-4hhxq to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | | prometheus-operator-admission-webhook-566b55489f-wzvmv | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-566b55489f-wzvmv | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-566b55489f-wzvmv to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-multus | | multus-4gxw6 | Scheduled | Successfully assigned openshift-multus/multus-4gxw6 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-multus | | multus-7hlr6 | Scheduled | Successfully assigned openshift-multus/multus-7hlr6 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-network-diagnostics | | network-check-target-qp2gp | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-qp2gp to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
openshift-apiserver |
apiserver-7c577f45d7-jlktw |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-7c577f45d7-jlktw to ci-op-9xx71rvq-1e28e-w667k-master-0 | ||
openshift-apiserver |
apiserver-7c577f45d7-jlktw |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-7c577f45d7-jlktw |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-7c577f45d7-bp26v |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver |
apiserver-7c577f45d7-bp26v |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-7c577f45d7-bp26v |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-7847c9d86c-6gjp8 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-7c577f45d7-bp26v |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 15s finished | |
openshift-apiserver |
apiserver |
apiserver-7c577f45d7-bp26v |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-7847c9d86c-6gjp8 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 15s finished | |
openshift-apiserver |
apiserver |
apiserver-7847c9d86c-6gjp8 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-7847c9d86c-6gjp8 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-7847c9d86c-6gjp8 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-network-diagnostics |
network-check-target-mgs54 |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-mgs54 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ||
openshift-apiserver |
apiserver-7c577f45d7-bp26v |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-7c577f45d7-bp26v |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-ovn-kubernetes |
ovnkube-node-tnm8w |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-tnm8w to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ||
openshift-apiserver |
apiserver |
apiserver-78d6c6c648-zwlsw |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-network-diagnostics |
network-check-target-8qg9z |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-8qg9z to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ||
openshift-apiserver |
apiserver |
apiserver-78d6c6c648-zwlsw |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver |
apiserver-78d6c6c648-zwlsw |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-78d6c6c648-zwlsw |
TerminationStoppedServing |
Server has stopped listening | |
openshift-multus |
multus-r82gp |
Scheduled |
Successfully assigned openshift-multus/multus-r82gp to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ||
openshift-machine-config-operator |
machine-config-daemon-ctlcc |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-ctlcc to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ||
openshift-apiserver |
apiserver |
apiserver-78d6c6c648-zwlsw |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 15s finished | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_d4cf3f1d-5e6a-491c-b70b-3b7f60bdb3cd became leader | |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_6faeaa2a-c732-43cd-bc26-12bf300d08be became leader | |
kube-system |
cluster-policy-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_4a383752-7674-4a51-90ce-256769df7627 became leader | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-version namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-etcd namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for default namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for kube-node-lease namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for kube-public namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for kube-system namespace | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_fd5df8b5-a3b3-4535-8a00-678b8e7da1a8 became leader | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-e2e-loki namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-credential-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-ingress-operator namespace | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_f9e93acc-934c-4e31-a481-9564023c3620 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_ac67219e-2c36-4e1b-b6a2-6b2cb2fc126f became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_41942841-1435-45d8-846b-a2ef951535fd became leader | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-54b95d5d49 to 1 | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-bootstrap_e7d0256e-071a-467c-a7b0-6396dc350131 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-insights namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-storage-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-network-config-controller namespace | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" architecture="amd64" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-etcd-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-machine-approver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-authentication-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-node-tuning-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-csi-drivers namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-marketplace namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-network-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-image-registry namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-machine-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-dns-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-samples-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-openstack-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kni-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-operator-lifecycle-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-ovirt-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-vsphere-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-operators namespace | |
openshift-kube-controller-manager-operator |
deployment-controller |
kube-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set kube-controller-manager-operator-699c988f9d to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-nutanix-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-platform-infra namespace | |
| (x14) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-54b95d5d49 |
FailedCreate |
Error creating: pods "cluster-version-operator-54b95d5d49-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-network-operator |
deployment-controller |
network-operator |
ScalingReplicaSet |
Scaled up replica set network-operator-7cbf958795 to 1 | |
openshift-kube-storage-version-migrator-operator |
deployment-controller |
kube-storage-version-migrator-operator |
ScalingReplicaSet |
Scaled up replica set kube-storage-version-migrator-operator-7df985cbf9 to 1 | |
openshift-apiserver-operator |
deployment-controller |
openshift-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set openshift-apiserver-operator-5799f4fc64 to 1 | |
openshift-service-ca-operator |
deployment-controller |
service-ca-operator |
ScalingReplicaSet |
Scaled up replica set service-ca-operator-c8bf8fc99 to 1 | |
openshift-controller-manager-operator |
deployment-controller |
openshift-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set openshift-controller-manager-operator-76c7cdf7c8 to 1 | |
openshift-marketplace |
deployment-controller |
marketplace-operator |
ScalingReplicaSet |
Scaled up replica set marketplace-operator-867c6b6ccc to 1 | |
openshift-kube-scheduler-operator |
deployment-controller |
openshift-kube-scheduler-operator |
ScalingReplicaSet |
Scaled up replica set openshift-kube-scheduler-operator-7759655b55 to 1 | |
openshift-dns-operator |
deployment-controller |
dns-operator |
ScalingReplicaSet |
Scaled up replica set dns-operator-6897b57cbf to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
openshift-etcd-operator |
deployment-controller |
etcd-operator |
ScalingReplicaSet |
Scaled up replica set etcd-operator-67976f8796 to 1 | |
| (x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-5b9b5c7f89 to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller-operator |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-operator-7f894469fd to 1 | |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-596f48f6bd to 1 | |
openshift-machine-config-operator |
deployment-controller |
machine-config-operator |
ScalingReplicaSet |
Scaled up replica set machine-config-operator-6d64fdfbc to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-799db46f99 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
package-server-manager |
ScalingReplicaSet |
Scaled up replica set package-server-manager-7c88c666f8 to 1 | |
openshift-ingress-operator |
deployment-controller |
ingress-operator |
ScalingReplicaSet |
Scaled up replica set ingress-operator-66bb9945d4 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
olm-operator |
ScalingReplicaSet |
Scaled up replica set olm-operator-9958db496 to 1 | |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-648fdc585 to 1 | |
openshift-image-registry |
deployment-controller |
cluster-image-registry-operator |
ScalingReplicaSet |
Scaled up replica set cluster-image-registry-operator-86c67755bb to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
catalog-operator |
ScalingReplicaSet |
Scaled up replica set catalog-operator-9d764bfb9 to 1 | |
openshift-machine-api |
deployment-controller |
cluster-baremetal-operator |
ScalingReplicaSet |
Scaled up replica set cluster-baremetal-operator-6475c74794 to 1 | |
openshift-config-operator |
deployment-controller |
openshift-config-operator |
ScalingReplicaSet |
Scaled up replica set openshift-config-operator-5cd48fc5bd to 1 | |
openshift-insights |
deployment-controller |
insights-operator |
ScalingReplicaSet |
Scaled up replica set insights-operator-6c5c749b84 to 1 | |
openshift-cluster-storage-operator |
deployment-controller |
cluster-storage-operator |
ScalingReplicaSet |
Scaled up replica set cluster-storage-operator-74bf5c6c66 to 1 | |
openshift-machine-api |
deployment-controller |
machine-api-operator |
ScalingReplicaSet |
Scaled up replica set machine-api-operator-6f847dd5f5 to 1 | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled down replica set cluster-version-operator-54b95d5d49 to 0 from 1 | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-6fff9b89f6 to 1 | |
openshift-machine-api |
deployment-controller |
control-plane-machine-set-operator |
ScalingReplicaSet |
Scaled up replica set control-plane-machine-set-operator-7f9c9cfdd9 to 1 | |
openshift-cluster-machine-approver |
deployment-controller |
machine-approver |
ScalingReplicaSet |
Scaled up replica set machine-approver-c66f7ccb7 to 1 | |
openshift-cloud-credential-operator |
deployment-controller |
cloud-credential-operator |
ScalingReplicaSet |
Scaled up replica set cloud-credential-operator-7b984c96f7 to 1 | |
openshift-machine-api |
deployment-controller |
cluster-autoscaler-operator |
ScalingReplicaSet |
Scaled up replica set cluster-autoscaler-operator-fffbcbd5b to 1 | |
openshift-cloud-controller-manager-operator |
deployment-controller |
cluster-cloud-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set cluster-cloud-controller-manager-operator-54fdff58dc to 1 | |
| (x14) | openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-c66f7ccb7 |
FailedCreate |
Error creating: pods "machine-approver-c66f7ccb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-54fdff58dc |
FailedCreate |
Error creating: pods "cluster-cloud-controller-manager-operator-54fdff58dc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-66bb9945d4 |
FailedCreate |
Error creating: pods "ingress-operator-66bb9945d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-7c88c666f8 |
FailedCreate |
Error creating: pods "package-server-manager-7c88c666f8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-648fdc585 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-648fdc585-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-9958db496 |
FailedCreate |
Error creating: pods "olm-operator-9958db496-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-86c67755bb |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-86c67755bb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-5cd48fc5bd |
FailedCreate |
Error creating: pods "openshift-config-operator-5cd48fc5bd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-machine-api |
replicaset-controller |
cluster-baremetal-operator-6475c74794 |
FailedCreate |
Error creating: pods "cluster-baremetal-operator-6475c74794-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-9d764bfb9 |
FailedCreate |
Error creating: pods "catalog-operator-9d764bfb9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-insights |
replicaset-controller |
insights-operator-6c5c749b84 |
FailedCreate |
Error creating: pods "insights-operator-6c5c749b84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-74bf5c6c66 | FailedCreate | Error creating: pods "cluster-storage-operator-74bf5c6c66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-machine-api | replicaset-controller | machine-api-operator-6f847dd5f5 | FailedCreate | Error creating: pods "machine-api-operator-6f847dd5f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-c66f7ccb7 to 0 from 1 |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-8477dc5fd6 to 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-54fdff58dc to 0 from 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-6cf975b6c8 to 1 |
| (x15) | openshift-cluster-version | replicaset-controller | cluster-version-operator-6fff9b89f6 | FailedCreate | Error creating: pods "cluster-version-operator-6fff9b89f6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-7f9c9cfdd9 | FailedCreate | Error creating: pods "control-plane-machine-set-operator-7f9c9cfdd9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-7b984c96f7 | FailedCreate | Error creating: pods "cloud-credential-operator-7b984c96f7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x15) | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-fffbcbd5b | FailedCreate | Error creating: pods "cluster-autoscaler-operator-fffbcbd5b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cluster-machine-approver | replicaset-controller | machine-approver-8477dc5fd6 | FailedCreate | Error creating: pods "machine-approver-8477dc5fd6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6cf975b6c8 | FailedCreate | Error creating: pods "cluster-cloud-controller-manager-operator-6cf975b6c8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-699c988f9d | FailedCreate | Error creating: pods "kube-controller-manager-operator-699c988f9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-7df985cbf9 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-7df985cbf9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-5799f4fc64 | FailedCreate | Error creating: pods "openshift-apiserver-operator-5799f4fc64-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-network-operator | replicaset-controller | network-operator-7cbf958795 | FailedCreate | Error creating: pods "network-operator-7cbf958795-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-c8bf8fc99 | FailedCreate | Error creating: pods "service-ca-operator-c8bf8fc99-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-marketplace | replicaset-controller | marketplace-operator-867c6b6ccc | FailedCreate | Error creating: pods "marketplace-operator-867c6b6ccc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-dns-operator | replicaset-controller | dns-operator-6897b57cbf | FailedCreate | Error creating: pods "dns-operator-6897b57cbf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7759655b55 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-7759655b55-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-76c7cdf7c8 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-76c7cdf7c8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-etcd-operator | replicaset-controller | etcd-operator-67976f8796 | FailedCreate | Error creating: pods "etcd-operator-67976f8796-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-authentication-operator | replicaset-controller | authentication-operator-5b9b5c7f89 | FailedCreate | Error creating: pods "authentication-operator-5b9b5c7f89-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container setup |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" in 2.563s (2.563s including waiting) |
| (x16) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7f894469fd | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-7f894469fd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-799db46f99 | FailedCreate | Error creating: pods "cluster-monitoring-operator-799db46f99-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-596f48f6bd | FailedCreate | Error creating: pods "cluster-node-tuning-operator-596f48f6bd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x16) | openshift-machine-config-operator | replicaset-controller | machine-config-operator-6d64fdfbc | FailedCreate | Error creating: pods "machine-config-operator-6d64fdfbc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
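The repeated `FailedCreate` events above are easiest to triage when summed per reason rather than read row by row. A minimal sketch (not part of the CI tooling): group event objects by `reason`, weighting each by its `count`. The inline `events` list is a hypothetical miniature of what `oc get events -A -o json` returns, keeping only the fields used here.

```python
from collections import Counter

# Hypothetical sample items shaped like Kubernetes Event objects;
# only "reason", "count", and "metadata.namespace" are retained.
events = [
    {"reason": "FailedCreate", "count": 15,
     "metadata": {"namespace": "openshift-machine-api"}},
    {"reason": "FailedCreate", "count": 16,
     "metadata": {"namespace": "openshift-etcd-operator"}},
    {"reason": "ScalingReplicaSet", "count": 1,
     "metadata": {"namespace": "openshift-cluster-machine-approver"}},
]

def totals_by_reason(events):
    """Sum event counts per reason, treating a missing count as 1."""
    totals = Counter()
    for ev in events:
        totals[ev["reason"]] += ev.get("count", 1)
    return totals

print(totals_by_reason(events))
```

On the sample data this reports 31 `FailedCreate` occurrences against 1 `ScalingReplicaSet`, which is the shape of the storm visible in the table above.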
| | openshift-kube-apiserver | kubelet | apiserver-watcher-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-kube-apiserver | kubelet | apiserver-watcher-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container apiserver-watcher |
| | openshift-kube-apiserver | kubelet | apiserver-watcher-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container apiserver-watcher |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-7c88c666f8-r2wz4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-66bb9945d4 | SuccessfulCreate | Created pod: ingress-operator-66bb9945d4-25hsj |
| | openshift-ingress-operator | default-scheduler | ingress-operator-66bb9945d4-25hsj | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-7c88c666f8 | SuccessfulCreate | Created pod: package-server-manager-7c88c666f8-r2wz4 |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-648fdc585 | SuccessfulCreate | Created pod: kube-apiserver-operator-648fdc585-xghvk |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-648fdc585-xghvk | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-9958db496 | SuccessfulCreate | Created pod: olm-operator-9958db496-pgws2 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-86c67755bb-2b7lz | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-9958db496-pgws2 | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-86c67755bb | SuccessfulCreate | Created pod: cluster-image-registry-operator-86c67755bb-2b7lz |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6475c74794 | SuccessfulCreate | Created pod: cluster-baremetal-operator-6475c74794-8hd5r |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-2 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-2 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-2 in Controller |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-9d764bfb9-w5dr5 | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-5cd48fc5bd | SuccessfulCreate | Created pod: openshift-config-operator-5cd48fc5bd-w9jqv |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-6475c74794-8hd5r | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-config-operator | default-scheduler | openshift-config-operator-5cd48fc5bd-w9jqv | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-9d764bfb9 | SuccessfulCreate | Created pod: catalog-operator-9d764bfb9-w5dr5 |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-1 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-1 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-1 in Controller |
| | openshift-insights | replicaset-controller | insights-operator-6c5c749b84 | SuccessfulCreate | Created pod: insights-operator-6c5c749b84-s7zkf |
| | openshift-insights | default-scheduler | insights-operator-6c5c749b84-s7zkf | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-74bf5c6c66 | SuccessfulCreate | Created pod: cluster-storage-operator-74bf5c6c66-mlzgt |
| | openshift-cluster-storage-operator | default-scheduler | cluster-storage-operator-74bf5c6c66-mlzgt | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-machine-api | replicaset-controller | machine-api-operator-6f847dd5f5 | SuccessfulCreate | Created pod: machine-api-operator-6f847dd5f5-wqkzk |
| | openshift-machine-api | default-scheduler | machine-api-operator-6f847dd5f5-wqkzk | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-8477dc5fd6 | SuccessfulCreate | Created pod: machine-approver-8477dc5fd6-82ddm |
| | openshift-cluster-machine-approver | default-scheduler | machine-approver-8477dc5fd6-82ddm | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
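The `FailedScheduling` messages above all trace back to the `node.cloudprovider.kubernetes.io/uninitialized` taint, which the cloud controller manager removes once it initializes each node. A simplified sketch of the check involved (the real logic lives in the scheduler's TaintToleration plugin and covers more cases, e.g. `PreferNoSchedule` and `tolerationSeconds`):

```python
# Simplified model: a pod fits a node only if every NoSchedule taint
# on the node is matched by one of the pod's tolerations.
def tolerates(toleration, taint):
    """True if a single toleration matches a single taint."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # An Exists toleration with an empty key tolerates every taint.
        return not toleration.get("key") or toleration["key"] == taint["key"]
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint["value"])

def schedulable(pod_tolerations, node_taints):
    return all(
        any(tolerates(t, taint) for t in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

uninitialized = {"key": "node.cloudprovider.kubernetes.io/uninitialized",
                 "value": "true", "effect": "NoSchedule"}
# Most operator pods carry no matching toleration, hence the failures above:
print(schedulable([], [uninitialized]))                 # False
# A pod that tolerates the taint (as the CCM pods do) can still land:
ccm_toleration = {"key": "node.cloudprovider.kubernetes.io/uninitialized",
                  "operator": "Exists", "effect": "NoSchedule"}
print(schedulable([ccm_toleration], [uninitialized]))   # True
```

This is why the `azure-cloud-controller-manager` pods below schedule while the rest of the control-plane operators sit in `FailedScheduling` until the taint is cleared.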
| | openshift-cloud-controller-manager-operator | default-scheduler | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-0 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-0 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-0 in Controller |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6cf975b6c8 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-6fff9b89f6 | SuccessfulCreate | Created pod: cluster-version-operator-6fff9b89f6-zgszm |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-6fff9b89f6-zgszm | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-7f9c9cfdd9 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-7f9c9cfdd9-6d8wg |
| | openshift-machine-api | default-scheduler | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-7b984c96f7 | SuccessfulCreate | Created pod: cloud-credential-operator-7b984c96f7-zjwpp |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 8.758s (8.759s including waiting) |
| | openshift-cloud-credential-operator | default-scheduler | cloud-credential-operator-7b984c96f7-zjwpp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-controller-manager-7bbb74ffdd-pf9st | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-controller-manager-7bbb74ffdd-pf9st to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Created | Created container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-node-manager | ResourceCreateSuccess | Resource was successfully created |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Created | Created container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | ci-op-9xx71rvq-1e28e-w667k-master-1_2c5502a8-5421-4a2e-b85e-a5619ac83e8d | cluster-cloud-controller-manager-leader | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_2c5502a8-5421-4a2e-b85e-a5619ac83e8d became leader |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-controller-manager-7bbb74ffdd-c5sr4 | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-controller-manager-7bbb74ffdd-c5sr4 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-controller-manager | replicaset-controller | azure-cloud-controller-manager-7bbb74ffdd | SuccessfulCreate | Created pod: azure-cloud-controller-manager-7bbb74ffdd-pf9st |
| | openshift-cloud-controller-manager | replicaset-controller | azure-cloud-controller-manager-7bbb74ffdd | SuccessfulCreate | Created pod: azure-cloud-controller-manager-7bbb74ffdd-c5sr4 |
| | openshift-cloud-controller-manager-operator | ci-op-9xx71rvq-1e28e-w667k-master-1_911112a2-5fe1-4828-825a-fed780e49c44 | cluster-cloud-config-sync-leader | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_911112a2-5fe1-4828-825a-fed780e49c44 became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| | openshift-cloud-controller-manager | deployment-controller | azure-cloud-controller-manager | ScalingReplicaSet | Scaled up replica set azure-cloud-controller-manager-7bbb74ffdd to 2 |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-mks6q |
| | default | cloud-controller-manager-operator | azure-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-node-manager-mks6q | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-mks6q to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x4) | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-controller-manager | ConfigurationCheckFailed | error calculating configuration hash: Secret "azure-cloud-credentials" not found |
| (x4) | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-node-manager | ConfigurationCheckFailed | error calculating configuration hash: Secret "azure-cloud-credentials" not found |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-mks6q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-node-manager-922gk | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-922gk to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-node-manager-chdv2 | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-chdv2 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-chdv2 |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-922gk |
| | default | cloud-controller-manager-operator | cloud-controller-manager:azure-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-rbac-proxy-crio |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-fffbcbd5b | SuccessfulCreate | Created pod: cluster-autoscaler-operator-fffbcbd5b-hpsfj |
| | openshift-machine-api | default-scheduler | cluster-autoscaler-operator-fffbcbd5b-hpsfj | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-rbac-proxy-crio |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-mks6q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 3.408s (3.408s including waiting) |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-mks6q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| (x2) | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-mks6q | Failed | Error: secret "azure-cloud-credentials" not found |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-7bbb74ffdd-pf9st | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-922gk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-922gk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 4.081s (4.081s including waiting) |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-7bbb74ffdd-pf9st | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 4.05s (4.05s including waiting) |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-922gk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| (x2) | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-922gk | Failed | Error: secret "azure-cloud-credentials" not found |
| (x2) | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-7bbb74ffdd-pf9st | Failed | Error: secret "azure-cloud-credentials" not found |
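The `Error: secret "azure-cloud-credentials" not found` failures mean these pods reference a Secret that had not been created yet. When triaging such events, the first step is to enumerate which Secrets a pod spec actually references; a minimal sketch (the sample spec is illustrative, not the real CCM manifest):

```python
# List every Secret name a pod spec references via volumes, env, or
# envFrom -- the places a "secret ... not found" error can originate.
def referenced_secrets(pod_spec):
    names = set()
    for vol in pod_spec.get("volumes", []):
        if "secret" in vol:
            names.add(vol["secret"]["secretName"])
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            ref = env.get("valueFrom", {}).get("secretKeyRef")
            if ref:
                names.add(ref["name"])
        for env_from in container.get("envFrom", []):
            if "secretRef" in env_from:
                names.add(env_from["secretRef"]["name"])
    return sorted(names)

# Hypothetical spec shaped like the failing pods above.
spec = {
    "volumes": [{"name": "creds",
                 "secret": {"secretName": "azure-cloud-credentials"}}],
    "containers": [{"name": "cloud-controller-manager", "env": []}],
}
print(referenced_secrets(spec))  # ['azure-cloud-credentials']
```

In this run the condition resolved itself: once the credential became available, the operator recreated the workloads (the `azure-inject-credentials` containers below) and the pods started cleanly.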
| | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-7bbb74ffdd-pf9st | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-rbac-proxy-crio |
| (x3) | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-7bbb74ffdd-c5sr4 | Failed | Error: secret "azure-cloud-credentials" not found |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-rbac-proxy-crio |
| (x3) | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-7bbb74ffdd-c5sr4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| (x3) | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-chdv2 | Failed | Error: secret "azure-cloud-credentials" not found |
| (x3) | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-chdv2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulDelete | Deleted pod: azure-cloud-node-manager-chdv2 |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-node-manager | ResourceUpdateSuccess | Resource was successfully updated |
| | openshift-cloud-controller-manager | replicaset-controller | azure-cloud-controller-manager-7bbb74ffdd | SuccessfulDelete | Deleted pod: azure-cloud-controller-manager-7bbb74ffdd-pf9st |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cloud-controller-manager | replicaset-controller | azure-cloud-controller-manager-7bbb74ffdd | SuccessfulDelete | Deleted pod: azure-cloud-controller-manager-7bbb74ffdd-c5sr4 |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-controller-manager | ResourceUpdateSuccess | Resource was successfully updated |
| | openshift-cloud-controller-manager | deployment-controller | azure-cloud-controller-manager | ScalingReplicaSet | Scaled down replica set azure-cloud-controller-manager-7bbb74ffdd to 0 from 2 |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulDelete | Deleted pod: azure-cloud-node-manager-922gk |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulDelete | Deleted pod: azure-cloud-node-manager-mks6q |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-njdg9 | Started | Started container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-ccfbdcbbd-qd4nl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cloud-controller-manager | replicaset-controller | azure-cloud-controller-manager-ccfbdcbbd | SuccessfulCreate | Created pod: azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-node-manager-njdg9 | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-njdg9 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-controller-manager-ccfbdcbbd-dxwmk | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-controller-manager-ccfbdcbbd-dxwmk to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-l4xmv |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-l4xmv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-l4xmv | Created | Created container azure-inject-credentials |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-njdg9 |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-l4xmv | Started | Started container azure-inject-credentials |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-2hm2r |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-njdg9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-njdg9 | Created | Created container azure-inject-credentials |
| | openshift-cloud-controller-manager | replicaset-controller | azure-cloud-controller-manager-ccfbdcbbd | SuccessfulCreate | Created pod: azure-cloud-controller-manager-ccfbdcbbd-dxwmk |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-controller-manager-ccfbdcbbd-qd4nl | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-controller-manager-ccfbdcbbd-qd4nl to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-node-manager-l4xmv | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-l4xmv to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x2) | openshift-cloud-controller-manager | controllermanager | azure-cloud-controller-manager | NoPods | No matching pods found |
| | openshift-cloud-controller-manager | deployment-controller | azure-cloud-controller-manager | ScalingReplicaSet | Scaled up replica set azure-cloud-controller-manager-ccfbdcbbd to 2 |
| | openshift-cloud-controller-manager | default-scheduler | azure-cloud-node-manager-2hm2r | Scheduled | Successfully assigned openshift-cloud-controller-manager/azure-cloud-node-manager-2hm2r to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-2hm2r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-ccfbdcbbd-dxwmk | Created | Created container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-njdg9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-2hm2r | Created | Created container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-l4xmv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-2hm2r |
Started |
Started container azure-inject-credentials | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-2hm2r |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-dxwmk |
Started |
Started container azure-inject-credentials | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-dxwmk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
Started |
Started container azure-inject-credentials | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
Created |
Created container azure-inject-credentials | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ce4fcfefebbc59a93cb599fbecd9dfdc61aca056610ba34247b5c8e1934dfaa" | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-dxwmk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ce4fcfefebbc59a93cb599fbecd9dfdc61aca056610ba34247b5c8e1934dfaa" | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-njdg9 |
Created |
Created container cloud-node-manager | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-l4xmv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" in 2.852s (2.852s including waiting) | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-2hm2r |
Started |
Started container cloud-node-manager | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-l4xmv |
Started |
Started container cloud-node-manager | |
openshift-cluster-version |
default-scheduler |
cluster-version-operator-6fff9b89f6-zgszm |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-6fff9b89f6-zgszm to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-2hm2r |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" in 3.146s (3.146s including waiting) | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-njdg9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" in 3.332s (3.332s including waiting) | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-2hm2r |
Created |
Created container cloud-node-manager | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-l4xmv |
Created |
Created container cloud-node-manager | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-njdg9 |
Started |
Started container cloud-node-manager | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
Created |
Created container cloud-controller-manager | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-dxwmk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ce4fcfefebbc59a93cb599fbecd9dfdc61aca056610ba34247b5c8e1934dfaa" in 3.234s (3.234s including waiting) | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ce4fcfefebbc59a93cb599fbecd9dfdc61aca056610ba34247b5c8e1934dfaa" in 3.063s (3.063s including waiting) | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-controller-manager-ccfbdcbbd-qd4nl |
Started |
Started container cloud-controller-manager | |
openshift-cloud-controller-manager |
cloud-controller-manager |
cloud-controller-manager |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-master-0_a445ad5c-51d1-41d0-9c03-53d8672929e8 became leader | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-28635045 | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28635045 |
SuccessfulCreate |
Created pod: collect-profiles-28635045-pspjp | |
| (x8) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-1 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-1_openshift-machine-config-operator(f6c952e46885c54268a34414bd405690) |
| (x8) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2_openshift-machine-config-operator(921dc3333bbc4b6c2e5b577d2fd67536) |
| (x4) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| (x4) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh |
Created |
Created container kube-rbac-proxy |
| (x4) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh |
Started |
Started container kube-rbac-proxy |
| (x5) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| (x9) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-0 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-0_openshift-machine-config-operator(edcf8a6ae0478e0309b67e2fa77ecaa4) |
| (x5) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-699c988f9d |
SuccessfulCreate |
Created pod: kube-controller-manager-operator-699c988f9d-nkb7r | |
openshift-kube-controller-manager-operator |
default-scheduler |
kube-controller-manager-operator-699c988f9d-nkb7r |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-7df985cbf9 |
SuccessfulCreate |
Created pod: kube-storage-version-migrator-operator-7df985cbf9-f4swj | |
openshift-network-operator |
replicaset-controller |
network-operator-7cbf958795 |
SuccessfulCreate |
Created pod: network-operator-7cbf958795-pszp8 | |
openshift-apiserver-operator |
default-scheduler |
openshift-apiserver-operator-5799f4fc64-s48zf |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-network-operator |
default-scheduler |
network-operator-7cbf958795-pszp8 |
Scheduled |
Successfully assigned openshift-network-operator/network-operator-7cbf958795-pszp8 to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-dns-operator |
replicaset-controller |
dns-operator-6897b57cbf |
SuccessfulCreate |
Created pod: dns-operator-6897b57cbf-6t6wl | |
openshift-dns-operator |
default-scheduler |
dns-operator-6897b57cbf-6t6wl |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-5799f4fc64 |
SuccessfulCreate |
Created pod: openshift-apiserver-operator-5799f4fc64-s48zf | |
openshift-network-operator |
kubelet |
network-operator-7cbf958795-pszp8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" | |
openshift-kube-storage-version-migrator-operator |
default-scheduler |
kube-storage-version-migrator-operator-7df985cbf9-f4swj |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-7759655b55 |
SuccessfulCreate |
Created pod: openshift-kube-scheduler-operator-7759655b55-g5bc2 | |
openshift-controller-manager-operator |
default-scheduler |
openshift-controller-manager-operator-76c7cdf7c8-mtp8c |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-marketplace |
replicaset-controller |
marketplace-operator-867c6b6ccc |
SuccessfulCreate |
Created pod: marketplace-operator-867c6b6ccc-rmltl | |
openshift-kube-scheduler-operator |
default-scheduler |
openshift-kube-scheduler-operator-7759655b55-g5bc2 |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-marketplace |
default-scheduler |
marketplace-operator-867c6b6ccc-rmltl |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-etcd-operator |
default-scheduler |
etcd-operator-67976f8796-p7shh |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-c8bf8fc99 |
SuccessfulCreate |
Created pod: service-ca-operator-c8bf8fc99-cjm9q | |
openshift-service-ca-operator |
default-scheduler |
service-ca-operator-c8bf8fc99-cjm9q |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-76c7cdf7c8 |
SuccessfulCreate |
Created pod: openshift-controller-manager-operator-76c7cdf7c8-mtp8c | |
openshift-etcd-operator |
replicaset-controller |
etcd-operator-67976f8796 |
SuccessfulCreate |
Created pod: etcd-operator-67976f8796-p7shh | |
openshift-authentication-operator |
default-scheduler |
authentication-operator-5b9b5c7f89-z28dx |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-authentication-operator |
replicaset-controller |
authentication-operator-5b9b5c7f89 |
SuccessfulCreate |
Created pod: authentication-operator-5b9b5c7f89-z28dx | |
openshift-network-operator |
kubelet |
network-operator-7cbf958795-pszp8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" in 4.088s (4.088s including waiting) | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-master-2_a6b79644-7754-4921-af11-c88f14394821 became leader | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} | |
openshift-network-operator |
kubelet |
mtu-prober-6l9kj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" already present on machine | |
openshift-network-operator |
default-scheduler |
mtu-prober-6l9kj |
Scheduled |
Successfully assigned openshift-network-operator/mtu-prober-6l9kj to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-network-operator |
job-controller |
mtu-prober |
SuccessfulCreate |
Created pod: mtu-prober-6l9kj | |
openshift-network-operator |
kubelet |
mtu-prober-6l9kj |
Started |
Started container prober | |
openshift-network-operator |
kubelet |
mtu-prober-6l9kj |
Created |
Created container prober | |
openshift-network-operator |
job-controller |
mtu-prober |
Completed |
Job completed | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7f894469fd |
SuccessfulCreate |
Created pod: csi-snapshot-controller-operator-7f894469fd-mcfdd | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-operator-7f894469fd-mcfdd |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-596f48f6bd |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-596f48f6bd-s4v8t | |
openshift-machine-config-operator |
default-scheduler |
machine-config-operator-6d64fdfbc-xtlls |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-monitoring |
default-scheduler |
cluster-monitoring-operator-799db46f99-r6f42 |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-cluster-node-tuning-operator |
default-scheduler |
cluster-node-tuning-operator-596f48f6bd-s4v8t |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-operator-6d64fdfbc |
SuccessfulCreate |
Created pod: machine-config-operator-6d64fdfbc-xtlls | |
openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-799db46f99 |
SuccessfulCreate |
Created pod: cluster-monitoring-operator-799db46f99-r6f42 | |
openshift-cloud-network-config-controller |
replicaset-controller |
cloud-network-config-controller-56cffd86cf |
SuccessfulCreate |
Created pod: cloud-network-config-controller-56cffd86cf-c4tcz | |
openshift-cloud-network-config-controller |
default-scheduler |
cloud-network-config-controller-56cffd86cf-c4tcz |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-cloud-network-config-controller |
deployment-controller |
cloud-network-config-controller |
ScalingReplicaSet |
Scaled up replica set cloud-network-config-controller-56cffd86cf to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-multus namespace | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-g4gxd | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-nr9x6 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-mpntt | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-78bcs | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-xj48s | |
openshift-multus |
default-scheduler |
multus-nr9x6 |
Scheduled |
Successfully assigned openshift-multus/multus-nr9x6 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-tz2qd | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-mpntt |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-mpntt to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-78bcs |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-78bcs to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" | |
openshift-multus |
kubelet |
multus-tz2qd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" | |
openshift-multus |
kubelet |
multus-g4gxd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" | |
openshift-multus |
default-scheduler |
multus-g4gxd |
Scheduled |
Successfully assigned openshift-multus/multus-g4gxd to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-multus |
kubelet |
multus-nr9x6 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" | |
openshift-multus |
default-scheduler |
multus-tz2qd |
Scheduled |
Successfully assigned openshift-multus/multus-tz2qd to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-xj48s |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-xj48s to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-multus |
default-scheduler |
network-metrics-daemon-tqqbv |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-tqqbv to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-multus |
default-scheduler |
network-metrics-daemon-jttv4 |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-jttv4 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-bh74v | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-tqqbv | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-jttv4 | |
openshift-multus |
default-scheduler |
network-metrics-daemon-bh74v |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-bh74v to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" in 3.303s (3.303s including waiting) | |
openshift-multus |
default-scheduler |
multus-admission-controller-6fc7977fb-4v6xp |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-6fc7977fb to 2 | |
openshift-multus |
replicaset-controller |
multus-admission-controller-6fc7977fb |
SuccessfulCreate |
Created pod: multus-admission-controller-6fc7977fb-4v6xp | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" in 3.028s (3.028s including waiting) | |
openshift-multus |
default-scheduler |
multus-admission-controller-6fc7977fb-zpcvg |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | |
openshift-multus |
replicaset-controller |
multus-admission-controller-6fc7977fb |
SuccessfulCreate |
Created pod: multus-admission-controller-6fc7977fb-zpcvg | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Created |
Created container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Created |
Created container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" in 3.599s (3.599s including waiting) | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-ovn-kubernetes namespace | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" | |
| (x9) | openshift-cluster-version |
kubelet |
cluster-version-operator-6fff9b89f6-zgszm |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-bb9sk | |
openshift-ovn-kubernetes |
replicaset-controller |
ovnkube-control-plane-5df5bbb869 |
SuccessfulCreate |
Created pod: ovnkube-control-plane-5df5bbb869-x5nhm | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-control-plane-5df5bbb869-7dsfz |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5df5bbb869-7dsfz to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-control-plane-5df5bbb869-x5nhm |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5df5bbb869-x5nhm to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-l9mpr | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-bb9sk |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-bb9sk to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-network-diagnostics namespace | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-5vwc5 |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-5vwc5 to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-l9mpr |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-l9mpr to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-5vwc5 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-host-network namespace | |
openshift-ovn-kubernetes |
replicaset-controller |
ovnkube-control-plane-5df5bbb869 |
SuccessfulCreate |
Created pod: ovnkube-control-plane-5df5bbb869-7dsfz | |
openshift-ovn-kubernetes |
deployment-controller |
ovnkube-control-plane |
ScalingReplicaSet |
Scaled up replica set ovnkube-control-plane-5df5bbb869 to 2 | |
openshift-network-diagnostics |
replicaset-controller |
network-check-source-775df55c85 |
SuccessfulCreate |
Created pod: network-check-source-775df55c85-86pxw | |
openshift-network-diagnostics |
deployment-controller |
network-check-source |
ScalingReplicaSet |
Scaled up replica set network-check-source-775df55c85 to 1 | |
openshift-multus |
kubelet |
multus-g4gxd |
Created |
Created container kube-multus | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Started |
Started container cni-plugins | |
openshift-network-diagnostics |
daemonset-controller |
network-check-target |
SuccessfulCreate |
Created pod: network-check-target-q4hxn | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-multus |
kubelet |
multus-g4gxd |
Started |
Started container kube-multus | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-x5nhm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-x5nhm |
Started |
Started container kube-rbac-proxy | |
openshift-network-diagnostics |
daemonset-controller |
network-check-target |
SuccessfulCreate |
Created pod: network-check-target-fmdsm | |
openshift-network-diagnostics |
default-scheduler |
network-check-target-fmdsm |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-fmdsm to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-multus |
kubelet |
multus-g4gxd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" in 14.061s (14.061s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" in 10.09s (10.09s including waiting) | |
openshift-network-diagnostics |
default-scheduler |
network-check-target-mcbft |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-mcbft to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-x5nhm |
Created |
Created container kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-x5nhm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-network-diagnostics |
default-scheduler |
network-check-target-q4hxn |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-q4hxn to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-network-diagnostics |
daemonset-controller |
network-check-target |
SuccessfulCreate |
Created pod: network-check-target-mcbft | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-network-node-identity namespace | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" | |
openshift-network-node-identity |
daemonset-controller |
network-node-identity |
SuccessfulCreate |
Created pod: network-node-identity-9q57h | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-network-node-identity |
default-scheduler |
network-node-identity-9q57h |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-9q57h to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" in 12.059s (12.059s including waiting) | |
openshift-network-node-identity |
daemonset-controller |
network-node-identity |
SuccessfulCreate |
Created pod: network-node-identity-gs6c8 | |
openshift-network-node-identity |
daemonset-controller |
network-node-identity |
SuccessfulCreate |
Created pod: network-node-identity-xl5tj | |
openshift-network-node-identity |
default-scheduler |
network-node-identity-xl5tj |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-xl5tj to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-multus |
kubelet |
multus-tz2qd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" in 16.513s (16.513s including waiting) | |
openshift-network-node-identity |
default-scheduler |
network-node-identity-gs6c8 |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-gs6c8 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-network-node-identity |
kubelet |
network-node-identity-9q57h |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Started |
Started container cni-plugins | |
openshift-multus |
kubelet |
multus-tz2qd |
Started |
Started container kube-multus | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" | |
openshift-network-node-identity |
kubelet |
network-node-identity-xl5tj |
FailedMount |
MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Created |
Created container cni-plugins | |
openshift-multus |
kubelet |
multus-tz2qd |
Created |
Created container kube-multus | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" in 13.735s (13.735s including waiting) | |
openshift-multus |
kubelet |
multus-nr9x6 |
Started |
Started container kube-multus | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-bb9sk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Started |
Started container cni-plugins | |
openshift-multus |
kubelet |
multus-nr9x6 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" in 18.726s (18.726s including waiting) | |
openshift-multus |
kubelet |
multus-nr9x6 |
Created |
Created container kube-multus | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-7dsfz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-7dsfz |
Created |
Created container kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-7dsfz |
Started |
Started container kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-7dsfz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-network-node-identity |
kubelet |
network-node-identity-gs6c8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Created |
Created container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" | |
openshift-network-node-identity |
kubelet |
network-node-identity-xl5tj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Created |
Created container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" in 3.167s (3.167s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" in 7.881s (7.881s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Created |
Created container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" in 6.351s (6.351s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Started |
Started container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Created |
Created container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" in 1.841s (1.841s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Created |
Created container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" in 1.909s (1.909s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Started |
Started container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 14.055s (14.055s including waiting) | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container kubecfg-setup | |
openshift-ovn-kubernetes |
controlplane |
ovn-kubernetes-master |
LeaderElection |
ovnkube-control-plane-5df5bbb869-x5nhm became leader | |
openshift-network-node-identity |
ci-op-9xx71rvq-1e28e-w667k-master-2_c21ef11a-abb0-4c8c-8eee-f4044ed4c02a |
ovnkube-identity |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-master-2_c21ef11a-abb0-4c8c-8eee-f4044ed4c02a became leader | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-5df5bbb869-x5nhm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 13.903s (13.903s including waiting) | |
openshift-network-node-identity |
kubelet |
network-node-identity-xl5tj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 5.28s (5.28s including waiting) | |
openshift-network-node-identity |
kubelet |
network-node-identity-xl5tj |
Created |
Created container webhook | |
openshift-network-node-identity |
kubelet |
network-node-identity-xl5tj |
Started |
Started container webhook | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" in 4.642s (4.642s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Started |
Started container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 15.973s (15.973s including waiting) | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Created |
Created container sbdb | |
openshift-network-node-identity |
kubelet |
network-node-identity-9q57h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 15.664s (15.664s including waiting) | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-l9mpr |
Started |
Started container sbdb | |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-jttv4 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container kubecfg-setup | |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-bh74v |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
openshift-network-node-identity |
kubelet |
network-node-identity-9q57h |
Created |
Created container webhook | |
openshift-network-node-identity |
kubelet |
network-node-identity-9q57h |
Started |
Started container webhook | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container kubecfg-setup | |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-tqqbv |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-tqqbv |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container ovn-acl-logging | |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-jttv4 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-bh74v |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
default |
ovnkube-csr-approver-controller |
csr-57zxb |
CSRApproved |
CSR "csr-57zxb" has been approved | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulDelete |
Deleted pod: ovnkube-node-l9mpr | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulDelete |
Deleted pod: ovnkube-node-bb9sk | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulDelete |
Deleted pod: ovnkube-node-5vwc5 | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-vgfpv |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-vgfpv to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container whereabouts-cni-bincopy | |
default |
controlplane |
ci-op-9xx71rvq-1e28e-w667k-master-2 |
ErrorAddingResource |
[cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-2: cannot set up hybrid overlay ports, distributed router ip is nil] | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-vgfpv | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" in 9.8s (9.8s including waiting) | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container ovn-acl-logging | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container ovn-controller | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Started |
Started container whereabouts-cni | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container nbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-78bcs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container nbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Created |
Created container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-5vwc5 |
Started |
Started container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Started |
Started container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vgfpv |
Created |
Created container sbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" in 15.642s (15.642s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Created |
Created container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Created |
Created container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Started |
Started container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-mpntt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" already present on machine | |
openshift-network-node-identity |
kubelet |
network-node-identity-gs6c8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-network-node-identity |
kubelet |
network-node-identity-gs6c8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 26.206s (26.206s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-xj48s |
Created |
Created container whereabouts-cni-bincopy | |
openshift-network-node-identity |
kubelet |
network-node-identity-gs6c8 |
Created |
Created container approver | |
openshift-multus |
kubelet |
| | | | multus-additional-cni-plugins-xj48s | Started | Started container whereabouts-cni-bincopy |
| | default | ovnkube-csr-approver-controller | csr-h8v9s | CSRApproved | CSR "csr-h8v9s" has been approved |
| | openshift-multus | kubelet | multus-additional-cni-plugins-xj48s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" in 17.811s (17.811s including waiting) |
| | openshift-network-node-identity | kubelet | network-node-identity-gs6c8 | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-bb9sk | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5bbb869-7dsfz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 25.965s (25.965s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5bbb869-7dsfz | Created | Created container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5bbb869-7dsfz | Started | Started container ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-gs6c8 | Started | Started container approver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-bb9sk | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-bb9sk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 26.167s (26.167s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vgfpv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-gs6c8 | Created | Created container webhook |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mpntt | Created | Created container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mpntt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-xj48s | Created | Created container whereabouts-cni |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-rxzbs | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-rxzbs to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-master-2 | ErrorAddingResource | [cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-2: cannot set up hybrid overlay ports, distributed router ip is nil] |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-rxzbs |
| | openshift-multus | kubelet | multus-additional-cni-plugins-xj48s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-xj48s | Started | Started container whereabouts-cni |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-q4hxn | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wjjbl" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-mcbft | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-qvfm9" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-fmdsm | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-5zfnr" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container ovn-acl-logging |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-q4hxn | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-xj48s | Created | Created container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-xj48s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-cpcp9 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-cpcp9 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-cpcp9 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-fmdsm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container nbdb |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-mcbft | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-rxzbs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-cpcp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | default | ovnkube-csr-approver-controller | csr-wfnnq | CSRApproved | CSR "csr-wfnnq" has been approved |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-master-0 | ErrorAddingResource | [cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-0: cannot set up hybrid overlay ports, distributed router ip is nil] |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-master-1 | ErrorAddingResource | [cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-1: cannot set up hybrid overlay ports, distributed router ip is nil] |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-master-2 | ErrorUpdatingResource | failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-2: cannot set up hybrid overlay ports, distributed router ip is nil |
| | default | ovnkube-csr-approver-controller | csr-5drph | CSRApproved | CSR "csr-5drph" has been approved |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-master-1 | ErrorUpdatingResource | failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-1: cannot set up hybrid overlay ports, distributed router ip is nil |
| | default | ovnkube-csr-approver-controller | csr-zrlhg | CSRApproved | CSR "csr-zrlhg" has been approved |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-master-0 | ErrorUpdatingResource | failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-master-0: cannot set up hybrid overlay ports, distributed router ip is nil |
| | default | ovnkube-csr-approver-controller | csr-hxp5n | CSRApproved | CSR "csr-hxp5n" has been approved |
| | openshift-dns-operator | default-scheduler | dns-operator-6897b57cbf-6t6wl | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-6897b57cbf-6t6wl to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-api | default-scheduler | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-fffbcbd5b-hpsfj to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-9958db496-pgws2 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-9958db496-pgws2 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-76c7cdf7c8-mtp8c to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-c8bf8fc99-cjm9q | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-c8bf8fc99-cjm9q to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-config-operator | default-scheduler | openshift-config-operator-5cd48fc5bd-w9jqv | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-5cd48fc5bd-w9jqv to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-config-operator | default-scheduler | machine-config-operator-6d64fdfbc-xtlls | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-6d64fdfbc-xtlls to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-699c988f9d-nkb7r | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-699c988f9d-nkb7r to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-6475c74794-8hd5r | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-6475c74794-8hd5r to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-zwkqb |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-888zr |
| | openshift-authentication-operator | default-scheduler | authentication-operator-5b9b5c7f89-z28dx | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-5b9b5c7f89-z28dx to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-api | default-scheduler | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-7f9c9cfdd9-6d8wg to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-etcd-operator | default-scheduler | etcd-operator-67976f8796-p7shh | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-67976f8796-p7shh to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-9d764bfb9-w5dr5 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-9d764bfb9-w5dr5 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-api | default-scheduler | machine-api-operator-6f847dd5f5-wqkzk | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-6f847dd5f5-wqkzk to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-marketplace | default-scheduler | marketplace-operator-867c6b6ccc-rmltl | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-867c6b6ccc-rmltl to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-5799f4fc64-s48zf | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-5799f4fc64-s48zf to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-network-operator | default-scheduler | iptables-alerter-zwkqb | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-zwkqb to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-86c67755bb-2b7lz | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-86c67755bb-2b7lz to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-network-operator | default-scheduler | iptables-alerter-888zr | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-888zr to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-network-config-controller | default-scheduler | cloud-network-config-controller-56cffd86cf-c4tcz | Scheduled | Successfully assigned openshift-cloud-network-config-controller/cloud-network-config-controller-56cffd86cf-c4tcz to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-648fdc585-xghvk | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-648fdc585-xghvk to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-ingress-operator | default-scheduler | ingress-operator-66bb9945d4-25hsj | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-66bb9945d4-25hsj to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-multus | default-scheduler | multus-admission-controller-6fc7977fb-zpcvg | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-6fc7977fb-zpcvg to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-multus | default-scheduler | multus-admission-controller-6fc7977fb-4v6xp | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-6fc7977fb-4v6xp to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-799db46f99-r6f42 | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-799db46f99-r6f42 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-7c88c666f8-r2wz4 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-7c88c666f8-r2wz4 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-7f894469fd-mcfdd | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7f894469fd-mcfdd to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-insights | default-scheduler | insights-operator-6c5c749b84-s7zkf | Scheduled | Successfully assigned openshift-insights/insights-operator-6c5c749b84-s7zkf to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-7df985cbf9-f4swj | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7df985cbf9-f4swj to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cluster-storage-operator | default-scheduler | cluster-storage-operator-74bf5c6c66-mlzgt | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-74bf5c6c66-mlzgt to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-credential-operator | default-scheduler | cloud-credential-operator-7b984c96f7-zjwpp | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-7b984c96f7-zjwpp to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-7759655b55-g5bc2 | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7759655b55-g5bc2 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-596f48f6bd-s4v8t | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-596f48f6bd-s4v8t to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cluster-machine-approver | default-scheduler | machine-approver-8477dc5fd6-82ddm | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-8477dc5fd6-82ddm to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-699c988f9d-nkb7r | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-648fdc585-xghvk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" |
| | openshift-network-operator | kubelet | iptables-alerter-888zr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" |
| | openshift-network-operator | kubelet | iptables-alerter-zwkqb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-7759655b55-g5bc2 | AddedInterface | Add eth0 [10.129.0.29/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7759655b55-g5bc2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" |
| | openshift-etcd-operator | multus | etcd-operator-67976f8796-p7shh | AddedInterface | Add eth0 [10.129.0.20/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-699c988f9d-nkb7r | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-etcd-operator | kubelet | etcd-operator-67976f8796-p7shh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-7df985cbf9-f4swj | AddedInterface | Add eth0 [10.129.0.23/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-7df985cbf9-f4swj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2217f372554ab69fda40095c92140fd60b05035749446270d5acabc18b956a9b" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-c8bf8fc99-cjm9q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f80df79d4e101968318c99f4f8bf6afc7c3729d2c1bf8eaf1fe3894bf8ff066" |
| | openshift-insights | kubelet | insights-operator-6c5c749b84-s7zkf | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-service-ca-operator | multus | service-ca-operator-c8bf8fc99-cjm9q | AddedInterface | Add eth0 [10.129.0.25/23] from ovn-kubernetes |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5799f4fc64-s48zf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:191ff3bb0eed21729ce43c31634050ee410b4db69b64664701cf399f747d150c" |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5147e93c2e576f931347a59e16d62590879b343d879632c7f0ba3c138cfa575b" |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | AddedInterface | Add eth0 [10.129.0.26/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-9d764bfb9-w5dr5 | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-5799f4fc64-s48zf | AddedInterface | Add eth0 [10.129.0.19/23] from ovn-kubernetes |
| | openshift-cloud-network-config-controller | multus | cloud-network-config-controller-56cffd86cf-c4tcz | AddedInterface | Add eth0 [10.129.0.33/23] from ovn-kubernetes |
| | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-56cffd86cf-c4tcz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fbaae7684d6ac205ebd327f527be846cf3dce959ab41648405ab5d6b20e03fd" |
| | openshift-authentication-operator | kubelet | authentication-operator-5b9b5c7f89-z28dx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b6392aef797fc81a43507586d4924fb2f4eca833e6b01bb431df4d70849284" |
| | openshift-authentication-operator | multus | authentication-operator-5b9b5c7f89-z28dx | AddedInterface | Add eth0 [10.129.0.31/23] from ovn-kubernetes |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-648fdc585-xghvk |
AddedInterface |
Add eth0 [10.129.0.14/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-7f894469fd-mcfdd |
AddedInterface |
Add eth0 [10.129.0.22/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
multus |
cluster-storage-operator-74bf5c6c66-mlzgt |
AddedInterface |
Add eth0 [10.129.0.16/23] from ovn-kubernetes | |
openshift-insights |
kubelet |
insights-operator-6c5c749b84-s7zkf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eade372bea1974bf9b2e7fefd818ff900b0c6b1ff4b80107fc3f378b95861420" | |
openshift-config-operator |
kubelet |
openshift-config-operator-5cd48fc5bd-w9jqv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:180455ad3917ad67485cd8000cef48ac66a57d6f39952262d9b0eb48b49f7e3c" | |
openshift-config-operator |
multus |
openshift-config-operator-5cd48fc5bd-w9jqv |
AddedInterface |
Add eth0 [10.129.0.24/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-699c988f9d-nkb7r |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-7f894469fd-mcfdd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ee1afc39ae2d94050f03f8c02343419efe6a53c3f5fdbc9d0bbd154e7efc82a" | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-74bf5c6c66-mlzgt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e805f1ea4410781909560e7065cbb4d7ea50ca32b91b98e16f31216290bfc2a3" | |
openshift-kube-controller-manager-operator |
multus |
kube-controller-manager-operator-699c988f9d-nkb7r |
AddedInterface |
Add eth0 [10.129.0.18/23] from ovn-kubernetes | |
openshift-insights |
multus |
insights-operator-6c5c749b84-s7zkf |
AddedInterface |
Add eth0 [10.129.0.15/23] from ovn-kubernetes | |
| | openshift-network-operator | kubelet | iptables-alerter-zwkqb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" in 4.34s (4.34s including waiting) |
| | openshift-network-operator | default-scheduler | iptables-alerter-j88xk | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-j88xk to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-j88xk |
| | openshift-network-operator | kubelet | iptables-alerter-j88xk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" |
| | openshift-network-operator | kubelet | iptables-alerter-zwkqb | Created | Created container iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-zwkqb | Started | Started container iptables-alerter |
| | openshift-network-diagnostics | multus | network-check-target-fmdsm | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-network-operator | kubelet | iptables-alerter-j88xk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" in 5.077s (5.077s including waiting) |
| | openshift-network-diagnostics | multus | network-check-target-q4hxn | AddedInterface | Add eth0 [10.130.0.4/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5147e93c2e576f931347a59e16d62590879b343d879632c7f0ba3c138cfa575b" in 15.42s (15.42s including waiting) |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-596f48f6bd-s4v8t | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-596f48f6bd-s4v8t | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x6) | openshift-dns-operator | kubelet | dns-operator-6897b57cbf-6t6wl | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7759655b55-g5bc2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" in 15.745s (15.745s including waiting) |
| | openshift-etcd-operator | kubelet | etcd-operator-67976f8796-p7shh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" in 15.412s (15.412s including waiting) |
| (x6) | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-648fdc585-xghvk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" in 15.532s (15.532s including waiting) |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:180455ad3917ad67485cd8000cef48ac66a57d6f39952262d9b0eb48b49f7e3c" in 14.23s (14.23s including waiting) |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7f894469fd-mcfdd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ee1afc39ae2d94050f03f8c02343419efe6a53c3f5fdbc9d0bbd154e7efc82a" in 14.472s (14.472s including waiting) |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7f894469fd-mcfdd | Started | Started container csi-snapshot-controller-operator |
| | openshift-insights | kubelet | insights-operator-6c5c749b84-s7zkf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eade372bea1974bf9b2e7fefd818ff900b0c6b1ff4b80107fc3f378b95861420" in 14.286s (14.286s including waiting) |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f9b07f19aafce26ce2e4bbdd2468b5f5e79842eb97811bfa4d83395c98dd6c36" |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-74bf5c6c66-mlzgt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e805f1ea4410781909560e7065cbb4d7ea50ca32b91b98e16f31216290bfc2a3" in 14.644s (14.644s including waiting) |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Started | Started container openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Created | Created container openshift-api |
| | openshift-network-operator | kubelet | iptables-alerter-888zr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" in 15.517s (15.517s including waiting) |
| | openshift-network-diagnostics | multus | network-check-target-mcbft | AddedInterface | Add eth0 [10.129.0.5/23] from ovn-kubernetes |
| | openshift-authentication-operator | kubelet | authentication-operator-5b9b5c7f89-z28dx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b6392aef797fc81a43507586d4924fb2f4eca833e6b01bb431df4d70849284" in 15.318s (15.318s including waiting) |
| (x5) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found |
| | openshift-service-ca-operator | kubelet | service-ca-operator-c8bf8fc99-cjm9q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f80df79d4e101968318c99f4f8bf6afc7c3729d2c1bf8eaf1fe3894bf8ff066" in 15.139s (15.139s including waiting) |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-699c988f9d-nkb7r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" in 14.227s (14.227s including waiting) |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-7df985cbf9-f4swj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2217f372554ab69fda40095c92140fd60b05035749446270d5acabc18b956a9b" in 15.445s (15.445s including waiting) |
| | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-56cffd86cf-c4tcz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fbaae7684d6ac205ebd327f527be846cf3dce959ab41648405ab5d6b20e03fd" in 15.062s (15.062s including waiting) |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5799f4fc64-s48zf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:191ff3bb0eed21729ce43c31634050ee410b4db69b64664701cf399f747d150c" in 15.746s (15.746s including waiting) |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7f894469fd-mcfdd | Created | Created container csi-snapshot-controller-operator |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/csi-snapshot-controller-pdb -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-5677697b57 | SuccessfulCreate | Created pod: csi-snapshot-controller-5677697b57-bt84b |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/csi-snapshot-webhook-clusterrole because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | ServiceCreated | Created Service/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/csi-snapshot-webhook-clusterrolebinding because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-5677697b57 | SuccessfulCreate | Created pod: csi-snapshot-controller-5677697b57-np5dk |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotwebhookcontroller-deployment-controller--csisnapshotwebhookcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/csi-snapshot-webhook-pdb -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotWebhookControllerAvailable: Waiting for Deployment") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes",Available message changed from "CSISnapshotWebhookControllerAvailable: Waiting for Deployment" to "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7f894469fd-mcfdd_880b66bf-18af-451c-bce5-55b0a32881e9 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| (x2) | openshift-cluster-storage-operator | controllermanager | csi-snapshot-controller-pdb | NoPods | No matching pods found |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-5677697b57 to 2 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-5677697b57-np5dk | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-5677697b57-np5dk to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-webhook-6ff94d4dc8-5vlq8 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-6ff94d4dc8-5vlq8 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-webhook-6ff94d4dc8-dzpl8 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-6ff94d4dc8-dzpl8 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-6ff94d4dc8 | SuccessfulCreate | Created pod: csi-snapshot-webhook-6ff94d4dc8-5vlq8 |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-6ff94d4dc8 | SuccessfulCreate | Created pod: csi-snapshot-webhook-6ff94d4dc8-dzpl8 |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled up replica set csi-snapshot-webhook-6ff94d4dc8 to 2 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-5677697b57-bt84b | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-5677697b57-bt84b to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}],status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-74bf5c6c66-mlzgt_2549786b-2973-4b19-88b2-0633462564ca became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7759655b55-g5bc2_b408cec1-5550-413c-9bd5-2a4e38375216 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-5677697b57-np5dk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d14e1f2e418e264cd5e0ac7f27dc41a10afbe1d8ccde91062f1db6a82007f02" |
| (x8) | openshift-cluster-csi-drivers |
replicaset-controller |
azure-disk-csi-driver-operator-7fcb8db8c9 |
FailedCreate |
Error creating: pods "azure-disk-csi-driver-operator-7fcb8db8c9-" is forbidden: error looking up service account openshift-cluster-csi-drivers/azure-disk-csi-driver-operator: serviceaccount "azure-disk-csi-driver-operator" not found |
openshift-cluster-csi-drivers |
deployment-controller |
azure-disk-csi-driver-operator |
ScalingReplicaSet |
Scaled up replica set azure-disk-csi-driver-operator-7fcb8db8c9 to 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"),Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] | |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-5b9b5c7f89-z28dx_99c0731c-3e57-4972-8299-88f5bddb70e7 became leader | |
| (x8) | openshift-cluster-csi-drivers |
replicaset-controller |
azure-file-csi-driver-operator-66b9ff7945 |
FailedCreate |
Error creating: pods "azure-file-csi-driver-operator-66b9ff7945-" is forbidden: error looking up service account openshift-cluster-csi-drivers/azure-file-csi-driver-operator: serviceaccount "azure-file-csi-driver-operator" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-648fdc585-xghvk_aff72bc6-761f-4eef-b217-2e043c5989af became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter |
| | openshift-cluster-csi-drivers | deployment-controller | azure-file-csi-driver-operator | ScalingReplicaSet | Scaled up replica set azure-file-csi-driver-operator-66b9ff7945 to 1 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] to [{"" "serviceaccounts" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator"} {"rbac.authorization.k8s.io" "roles" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "azure-disk-csi-driver-operator-clusterrole"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "azure-disk-csi-driver-operator-clusterrolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator-rolebinding"} {"operator.openshift.io" "clustercsidrivers" "" "disk.csi.azure.com"} {"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-network-operator | kubelet | iptables-alerter-j88xk | Started | Started container iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-j88xk | Created | Created container iptables-alerter |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-AzureDisk | cluster-storage-operator | DeploymentCreated | Created Deployment.apps/azure-disk-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "authentications" "" "cluster"} {"config.openshift.io" "authentications" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"route.openshift.io" "routes" "openshift-authentication" "oauth-openshift"} {"" "services" "openshift-authentication" "oauth-openshift"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-authentication"} {"" "namespaces" "" "openshift-authentication-operator"} {"" "namespaces" "" "openshift-ingress"} {"" "namespaces" "" "openshift-oauth-apiserver"}],status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-AzureFile | cluster-storage-operator | DeploymentCreated | Created Deployment.apps/azure-file-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: status.relatedObjects changed from [{"" "serviceaccounts" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator"} {"rbac.authorization.k8s.io" "roles" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "azure-disk-csi-driver-operator-clusterrole"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "azure-disk-csi-driver-operator-clusterrolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator-rolebinding"} {"operator.openshift.io" "clustercsidrivers" "" "disk.csi.azure.com"} {"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] to [{"" "serviceaccounts" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator"} {"rbac.authorization.k8s.io" "roles" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "azure-disk-csi-driver-operator-clusterrole"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "azure-disk-csi-driver-operator-clusterrolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-cluster-csi-drivers" "azure-disk-csi-driver-operator-rolebinding"} {"operator.openshift.io" "clustercsidrivers" "" "disk.csi.azure.com"} {"" "serviceaccounts" "openshift-cluster-csi-drivers" "azure-file-csi-driver-operator"} {"rbac.authorization.k8s.io" "roles" "openshift-cluster-csi-drivers" "azure-file-csi-driver-operator-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-cluster-csi-drivers" "azure-file-csi-driver-operator-rolebinding"} {"rbac.authorization.k8s.io" "clusterroles" "" "azure-file-csi-driver-operator-clusterrole"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "azure-file-csi-driver-operator-clusterrolebinding"} {"operator.openshift.io" "clustercsidrivers" "" "file.csi.azure.com"} {"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from False to True ("AzureDiskProgressing: Waiting for Deployment to act on changes") |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-AzureDisk | cluster-storage-operator | ClusterCSIDriverCreated | Created ClusterCSIDriver.operator.openshift.io/disk.csi.azure.com -n openshift-cluster-csi-drivers because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67976f8796-p7shh_e362bd77-de61-49f0-a74a-3af46f65c5f2 became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-azurediskcsidriveroperatorstaticcontroller-azurediskcsidriveroperatorstaticcontroller | cluster-storage-operator | ServiceAccountCreated | Created ServiceAccount/azure-disk-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-azurediskcsidriveroperatorstaticcontroller-azurediskcsidriveroperatorstaticcontroller | cluster-storage-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/azure-disk-csi-driver-operator-role -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-azurediskcsidriveroperatorstaticcontroller-azurediskcsidriveroperatorstaticcontroller | cluster-storage-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/azure-disk-csi-driver-operator-clusterrole because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-azurediskcsidriveroperatorstaticcontroller-azurediskcsidriveroperatorstaticcontroller | cluster-storage-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-driver-operator-clusterrolebinding because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-699c988f9d-nkb7r_0043b526-0b4c-4a74-a4bd-94531c4b57a4 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-nodecontroller | kube-controller-manager-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-nodecontroller | kube-controller-manager-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-1 |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-nodecontroller | kube-controller-manager-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-0 |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-5677697b57-bt84b | AddedInterface | Add eth0 [10.130.0.7/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-5677697b57-bt84b | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d14e1f2e418e264cd5e0ac7f27dc41a10afbe1d8ccde91062f1db6a82007f02" |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-5677697b57-np5dk | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cee9993b6f"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d87b9ddf5e"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-azurefilecsidriveroperatorstaticcontroller-azurefilecsidriveroperatorstaticcontroller | cluster-storage-operator | ServiceAccountCreated | Created ServiceAccount/azure-file-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreateFailed | Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-7df985cbf9-f4swj_91dd2dfd-d18d-400d-b54a-0353a6fdca58 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | default-scheduler | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-azurefilecsidriveroperatorstaticcontroller-azurefilecsidriveroperatorstaticcontroller | cluster-storage-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/azure-file-csi-driver-operator-rolebinding -n openshift-cluster-csi-drivers because it was missing |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "AzureDiskCSIDriverOperatorDeploymentDegraded: deployment openshift-cluster-csi-drivers/azure-disk-csi-driver-operator has some pods failing; unavailable replicas=1" | |
openshift-cluster-storage-operator |
cluster-storage-operator-CSIDriverStarter-azurefilecsidriveroperatorstaticcontroller-azurefilecsidriveroperatorstaticcontroller |
cluster-storage-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/azure-file-csi-driver-operator-role -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-CSIDriverStarter-azurediskcsidriveroperatorstaticcontroller-azurediskcsidriveroperatorstaticcontroller |
cluster-storage-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/azure-disk-csi-driver-operator-rolebinding -n openshift-cluster-csi-drivers because it was missing | |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to BuildCSIVolumes=true |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-c8bf8fc99-cjm9q_f03cc8f9-fdc2-483e-a714-75565ab77e70 became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-76c7cdf7c8-mtp8c_229e1965-f8f3-49c3-9dc7-750a5280abdc became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-peer-ci-op-9xx71rvq-1e28e-w667k-master-0" in "openshift-etcd" requires a new target cert/key pair: missing notAfter |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 3 nodes are at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-file-csi-driver-operator-66b9ff7945 | SuccessfulCreate | Created pod: azure-file-csi-driver-operator-66b9ff7945-fpvl2 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7777c4d45c to 3 |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d132ab75ab591682220976b04e6e82e37482fa971fd9e3576f8f144095897eec" |
| | openshift-cluster-csi-drivers | multus | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | default-scheduler | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-operator-66b9ff7945-fpvl2 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | FastControllerResync | Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-5799f4fc64-s48zf_d413c9d2-c5cc-4128-ae05-bc0d943df857 became leader |
| (x6) | openshift-controller-manager | replicaset-controller | controller-manager-7777c4d45c | FailedCreate | Error creating: pods "controller-manager-7777c4d45c-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-disk-csi-driver-operator-7fcb8db8c9 | SuccessfulCreate | Created pod: azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b7fcff180a9d703eaff4eed0aaa4879bc21f6ff1f39c55f4836a2a135eb5da44" |
| | openshift-cluster-csi-drivers | multus | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-5677697b57-bt84b | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d14e1f2e418e264cd5e0ac7f27dc41a10afbe1d8ccde91062f1db6a82007f02" in 2.403s (2.403s including waiting) |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-5677697b57-bt84b | Started | Started container snapshot-controller |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-5677697b57-np5dk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d14e1f2e418e264cd5e0ac7f27dc41a10afbe1d8ccde91062f1db6a82007f02" in 2.655s (2.655s including waiting) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-788b5f7c5c to 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: ",Progressing changed from Unknown to False ("All is well") |
| | openshift-kube-storage-version-migrator | replicaset-controller | migrator-788b5f7c5c | SuccessfulCreate | Created pod: migrator-788b5f7c5c-5tg6z |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f9b07f19aafce26ce2e4bbdd2468b5f5e79842eb97811bfa4d83395c98dd6c36" in 4.625s (4.625s including waiting) |
| | openshift-controller-manager | default-scheduler | controller-manager-7777c4d45c-7lj9c | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7777c4d45c-7lj9c to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7777c4d45c-7lj9c | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7777c4d45c-7lj9c | FailedMount | MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| | openshift-kube-storage-version-migrator | default-scheduler | migrator-788b5f7c5c-5tg6z | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-788b5f7c5c-5tg6z to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-controller-manager | default-scheduler | controller-manager-7777c4d45c-h8m2s | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7777c4d45c-h8m2s to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7777c4d45c-h8m2s | FailedMount | MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7777c4d45c-h8m2s | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
| | openshift-controller-manager | default-scheduler | controller-manager-7777c4d45c-v4264 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7777c4d45c-v4264 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7777c4d45c-v4264 | FailedMount | MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-v4264 |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Available message changed from "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" to "CSISnapshotWebhookControllerAvailable: Waiting for Deployment" | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7777c4d45c |
SuccessfulCreate |
Created pod: controller-manager-7777c4d45c-7lj9c | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7777c4d45c |
SuccessfulCreate |
Created pod: controller-manager-7777c4d45c-v4264 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7777c4d45c |
SuccessfulCreate |
Created pod: controller-manager-7777c4d45c-h8m2s | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskProgressing: Waiting for Deployment to act on changes" to "AzureDiskProgressing: Waiting for Deployment to deploy pods" | |
openshift-cluster-storage-operator |
cluster-storage-operator-CSIDriverStarter-azurefilecsidriveroperatorstaticcontroller-azurefilecsidriveroperatorstaticcontroller |
cluster-storage-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-csi-driver-operator-clusterrolebinding because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-CSIDriverStarter-AzureFile |
cluster-storage-operator |
ClusterCSIDriverCreated |
Created ClusterCSIDriver.operator.openshift.io/file.csi.azure.com -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-CSIDriverStarter-azurefilecsidriveroperatorstaticcontroller-azurefilecsidriveroperatorstaticcontroller |
cluster-storage-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/azure-file-csi-driver-operator-clusterrole because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-5677697b57-bt84b |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-5677697b57-bt84b became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ServiceAccountCreated |
Created ServiceAccount/service-ca -n openshift-service-ca because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-5677697b57-bt84b |
Created |
Created container snapshot-controller | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-defrag-controller-defragcontroller |
etcd-operator |
DefragControllerUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
TargetUpdateRequired |
"etcd-serving-ci-op-9xx71rvq-1e28e-w667k-master-0" in "openshift-etcd" requires a new target cert/key pair: missing notAfter | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-peer-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMissing |
no observedConfig | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/14")}, + "cluster-name": []any{string("ci-op-9xx71rvq-1e28e-w667k")}, + "feature-gates": []any{ + string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), + string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("CSIDriverSharedResource=false"), string("ChunkSizeMiB=false"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), + string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("CSIDriverSharedResource=false"), string("ChunkSizeMiB=false"), + string("CloudDualStackNodeIPs=true"), string("ClusterAPIInstall=false"), + string("ClusterAPIInstallAWS=true"), string("ClusterAPIInstallAzure=false"), + string("ClusterAPIInstallGCP=false"), + string("ClusterAPIInstallIBMCloud=false"), + string("ClusterAPIInstallNutanix=true"), + string("ClusterAPIInstallOpenStack=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated extendedArguments.feature-gates to AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
CABundleUpdateRequired |
"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created configmap/openshift-service-ca-n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-route-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-serving-metrics-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageFailed |
configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-v4264 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-v4264 |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: missing notAfter | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-6ff4f55f67 to 1 | |
| (x3) | openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
openshift-service-ca |
replicaset-controller |
service-ca-6ff4f55f67 |
SuccessfulCreate |
Created pod: service-ca-6ff4f55f67-szqhs | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveCloudProviderNamesChanges |
CloudProvider config file changed to /etc/kubernetes/static-pod-resources/configmaps/cloud-config/config | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://localhost:2379 | |
openshift-service-ca |
kubelet |
service-ca-6ff4f55f67-szqhs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f80df79d4e101968318c99f4f8bf6afc7c3729d2c1bf8eaf1fe3894bf8ff066" | |
openshift-service-ca |
multus |
service-ca-6ff4f55f67-szqhs |
AddedInterface |
Add eth0 [10.130.0.12/23] from ovn-kubernetes | |
openshift-service-ca |
default-scheduler |
service-ca-6ff4f55f67-szqhs |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-6ff4f55f67-szqhs to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert | |
| (x3) | openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-5677697b57-np5dk |
Started |
Started container snapshot-controller | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"} {"csi-snapshot-controller" "4.16.0-0.nightly-2024-06-10-211334"}] | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-5c89cb9bc9 to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7777c4d45c to 2 from 3 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-serving-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
TargetUpdateRequired |
"etcd-serving-metrics-ci-op-9xx71rvq-1e28e-w667k-master-0" in "openshift-etcd" requires a new target cert/key pair: missing notAfter | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-78b66d7c68 to 3 | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-7lj9c |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/config")}, + "etcd-servers": []any{string("https://localhost:2379")}, + "feature-gates": []any{ + string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), + string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("CSIDriverSharedResource=false"), string("ChunkSizeMiB=false"), ..., + }, + "send-retry-after-while-not-ready-once": []any{string("false")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.c"...)}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-5677697b57-np5dk |
Created |
Created container snapshot-controller | |
| (x4) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well") | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.16.0-0.nightly-2024-06-10-211334"} {"operator" "4.16.0-0.nightly-2024-06-10-211334"}] | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: " | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-storage-version-migrator |
multus |
migrator-788b5f7c5c-5tg6z |
AddedInterface |
Add eth0 [10.130.0.10/23] from ovn-kubernetes | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ServiceCreated |
Created Service/scheduler -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), + string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("CSIDriverSharedResource=false"), string("ChunkSizeMiB=false"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{ + "subdomain": string("apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com"), + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
authentication-operator |
FastControllerResync |
Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
RoutingConfigSubdomainChanged |
Domain changed from "" to "apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com" | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-5cd48fc5bd-w9jqv_d05578a2-e82a-415b-a50c-3ee779dd56ac became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
TargetUpdateRequired |
"etcd-peer-ci-op-9xx71rvq-1e28e-w667k-master-1" in "openshift-etcd" requires a new target cert/key pair: missing notAfter | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-788b5f7c5c-5tg6z |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e287ab32d1cbe1209ccedc3a31649d2a75d5a4a8097590d600e0f3f7db99fc5c" | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-7lj9c |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-78b66d7c68-g6tds |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-78b66d7c68-g6tds to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7777c4d45c |
SuccessfulDelete |
Deleted pod: controller-manager-7777c4d45c-7lj9c | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5c89cb9bc9 |
SuccessfulCreate |
Created pod: controller-manager-5c89cb9bc9-j9bzk | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-78b66d7c68-kqbr5 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-78b66d7c68-kqbr5 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-controller-manager |
default-scheduler |
controller-manager-5c89cb9bc9-j9bzk |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-78b66d7c68 |
SuccessfulCreate |
Created pod: route-controller-manager-78b66d7c68-fjzpk | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-78b66d7c68 |
SuccessfulCreate |
Created pod: route-controller-manager-78b66d7c68-kqbr5 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-78b66d7c68-fjzpk |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-78b66d7c68-fjzpk to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
ConfigOperatorStatusChanged |
Operator conditions defaulted: [{OperatorAvailable True 2024-06-11 10:48:23 +0000 UTC AsExpected } {OperatorProgressing False 2024-06-11 10:48:23 +0000 UTC AsExpected } {OperatorUpgradeable True 2024-06-11 10:48:23 +0000 UTC AsExpected }] | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveFeatureFlagsUpdated |
Updated apiServerArguments.feature-gates to AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-config-operator |
config-operator-kubecloudconfigcontroller |
openshift-config-operator |
KubeCloudConfigController |
openshift-config-managed/kube-cloud-config ConfigMap was updated | |
openshift-config-operator |
config-operator-kubecloudconfigcontroller |
openshift-config-operator |
ConfigMapCreated |
Created ConfigMap/kube-cloud-config -n openshift-config-managed because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-78b66d7c68 |
SuccessfulCreate |
Created pod: route-controller-manager-78b66d7c68-g6tds | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "feature-gates" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-peer-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-serving-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
TargetUpdateRequired |
"etcd-serving-metrics-ci-op-9xx71rvq-1e28e-w667k-master-1" in "openshift-etcd" requires a new target cert/key pair: missing notAfter | |
openshift-controller-manager |
replicaset-controller |
controller-manager-58c5c594b9 |
SuccessfulCreate |
Created pod: controller-manager-58c5c594b9-s5vgm | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-5c89cb9bc9-j9bzk |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5c89cb9bc9-j9bzk to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources |
etcd-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/v1 because it was missing |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7777c4d45c |
SuccessfulDelete |
Deleted pod: controller-manager-7777c4d45c-v4264 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
SecretUpdated |
Updated Secret/etcd-client -n openshift-etcd-operator because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
TargetUpdateRequired |
"etcd-serving-ci-op-9xx71rvq-1e28e-w667k-master-1" in "openshift-etcd" requires a new target cert/key pair: missing notAfter | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-metric-client -n openshift-etcd-operator because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources |
etcd-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources |
etcd-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources |
etcd-operator |
ServiceUpdated |
Updated Service/etcd -n openshift-etcd because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7777c4d45c to 1 from 2 | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "AzureDiskCSIDriverOperatorDeploymentDegraded: deployment openshift-cluster-csi-drivers/azure-disk-csi-driver-operator has some pods failing; unavailable replicas=1" to "All is well" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-58c5c594b9 to 1 from 0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-controller-manager because it changed | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 3\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.openshift-global-ca.configmap | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources |
authentication-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 3\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b7fcff180a9d703eaff4eed0aaa4879bc21f6ff1f39c55f4836a2a135eb5da44" in 4.098s (4.098s including waiting) | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskProgressing: Waiting for Deployment to deploy pods" to "AzureDiskProgressing: Waiting for Deployment to deploy pods\nAzureFileProgressing: Waiting for Deployment to act on changes" | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-operator-66b9ff7945-fpvl2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d132ab75ab591682220976b04e6e82e37482fa971fd9e3576f8f144095897eec" in 4.024s (4.024s including waiting) | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
CSRCreated |
A csr "system:openshift:openshift-authenticator-7sbh4" is created for OpenShiftAuthenticatorCertRequester | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources |
openshift-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator |
authentication-operator |
CSRApproval |
The CSR "system:openshift:openshift-authenticator-7sbh4" has been approved | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-788b5f7c5c-5tg6z |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e287ab32d1cbe1209ccedc3a31649d2a75d5a4a8097590d600e0f3f7db99fc5c" in 2.423s (2.423s including waiting) | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources |
openshift-apiserver-operator |
NamespaceCreated |
Created Namespace/openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/audit -n openshift-apiserver: namespaces "openshift-apiserver" not found | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-788b5f7c5c-5tg6z |
Created |
Created container migrator | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-etcd-operator |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-backup-sa -n openshift-etcd because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-788b5f7c5c-5tg6z | Started | Started container migrator |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| (x2) | openshift-controller-manager | default-scheduler | controller-manager-58c5c594b9-s5vgm | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources | etcd-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:operator:etcd-backup-role because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:etcd-backup-crb because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | SecretCreated | Created Secret/etcd-serving-metrics-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-peer-ci-op-9xx71rvq-1e28e-w667k-master-2" in "openshift-etcd" requires a new target cert/key pair: missing notAfter |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | SecretCreated | Created Secret/etcd-peer-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-serving-ci-op-9xx71rvq-1e28e-w667k-master-2" in "openshift-etcd" requires a new target cert/key pair: missing notAfter |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | SecretCreated | Created Secret/etcd-serving-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-serving-metrics-ci-op-9xx71rvq-1e28e-w667k-master-2" in "openshift-etcd" requires a new target cert/key pair: missing notAfter |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources | openshift-apiserver-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/openshift-apiserver-pdb -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| (x18) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-service-ca | kubelet | service-ca-6ff4f55f67-szqhs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f80df79d4e101968318c99f4f8bf6afc7c3729d2c1bf8eaf1fe3894bf8ff066" in 2.789s (2.789s including waiting) |
| | openshift-service-ca | kubelet | service-ca-6ff4f55f67-szqhs | Created | Created container service-ca-controller |
| | openshift-service-ca | kubelet | service-ca-6ff4f55f67-szqhs | Started | Started container service-ca-controller |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator | azure-disk-csi-driver-operator-lock | LeaderElection | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq_43fa35b0-b444-40cd-9bae-f0e7873979b3 became leader |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-nodecontroller | kube-apiserver-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-nodecontroller | kube-apiserver-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-1 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-nodecontroller | kube-apiserver-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-0 |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-6ff94d4dc8-5vlq8 | FailedMount | MountVolume.SetUp failed for volume "certs" : secret "csi-snapshot-webhook-secret" not found |
| (x11) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-6ff94d4dc8-dzpl8 | FailedMount | MountVolume.SetUp failed for volume "certs" : secret "csi-snapshot-webhook-secret" not found |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator | azure-file-csi-driver-operator-lock | LeaderElection | azure-file-csi-driver-operator-66b9ff7945-fpvl2_75d7716e-3c97-4343-a385-5f90b03e5920 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: missing notAfter |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator | azure-file-csi-driver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-csi-config-observer-controller-azurefiledrivercsiconfigobservercontroller-config-observer-configobserver | azure-file-csi-driver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-csi-config-observer-controller-azurefiledrivercsiconfigobservercontroller-config-observer-configobserver | azure-file-csi-driver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-csi-config-observer-controller-azurefiledrivercsiconfigobservercontroller-config-observer-configobserver | azure-file-csi-driver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "targetcsiconfig": map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM"...), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_S"...), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM"...), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_S"...), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + }, } |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator | azure-file-csi-driver-operator | StorageClassCreated | Created StorageClass.storage.k8s.io/azurefile-csi because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-csi-driver-node-service_azurefiledrivernodeservicecontroller-azurefiledrivernodeservicecontroller | azure-file-csi-driver-operator | DaemonSetCreated | Created DaemonSet.apps/azure-file-csi-driver-node -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/azure-file-privileged-role because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/azure-file-kube-rbac-proxy-role because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivercontrollerservicecontroller-deployment-controller--azurefiledrivercontrollerservicecontroller | azure-file-csi-driver-operator | DeploymentCreated | Created Deployment.apps/azure-file-csi-driver-controller -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-resource-sync-controller-resourcesynccontroller | azure-file-csi-driver-operator | ConfigMapCreated | Created ConfigMap/azure-cloud-config -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/azure-file-csi-driver-role because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller | azure-file-csi-driver-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/azure-file-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-cluster-csi-drivers | deployment-controller | azure-file-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-file-csi-driver-controller-57bfbc5977 to 2 |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | SecretCreated | Created Secret/etcd-serving-metrics-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretUpdated | Updated Secret/etcd-all-certs -n openshift-etcd because it changed |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/14")}, "cluster-name": []any{string("ci-op-9xx71rvq-1e28e-w667k")}, "feature-gates": []any{string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, } |
| | openshift-controller-manager | default-scheduler | controller-manager-58c5c594b9-s5vgm | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-58c5c594b9-s5vgm to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/azure-disk-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller | azure-disk-csi-driver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/azure-disk-kube-rbac-proxy-role because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivercontrollerservicecontroller-deployment-controller--azurediskdrivercontrollerservicecontroller | azure-disk-csi-driver-operator | DeploymentCreated | Created Deployment.apps/azure-disk-csi-driver-controller -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | deployment-controller | azure-disk-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-disk-csi-driver-controller-75b96cdcf6 to 2 |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-csi-driver-node-service_azurediskdrivernodeservicecontroller-azurediskdrivernodeservicecontroller | azure-disk-csi-driver-operator | DaemonSetCreated | Created DaemonSet.apps/azure-disk-csi-driver-node -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator | azure-disk-csi-driver-operator | StorageClassCreated | Created StorageClass.storage.k8s.io/managed-csi because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-csi-config-observer-controller-azurediskdrivercsiconfigobservercontroller-config-observer-configobserver | azure-disk-csi-driver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "targetcsiconfig": map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM"...), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_S"...), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM"...), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_S"...), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + }, } |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-csi-config-observer-controller-azurediskdrivercsiconfigobservercontroller-config-observer-configobserver | azure-disk-csi-driver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-csi-config-observer-controller-azurediskdrivercsiconfigobservercontroller-config-observer-configobserver | azure-disk-csi-driver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator | azure-disk-csi-driver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorVersionChanged |
clusteroperator/service-ca version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing | |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-5c89cb9bc9-j9bzk |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/azure-disk-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
ValidatingWebhookConfigurationUpdated |
Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
| (x9) | openshift-cluster-csi-drivers |
replicaset-controller |
azure-file-csi-driver-controller-57bfbc5977 |
FailedCreate |
Error creating: pods "azure-file-csi-driver-controller-57bfbc5977-" is forbidden: error looking up service account openshift-cluster-csi-drivers/azure-file-csi-driver-controller-sa: serviceaccount "azure-file-csi-driver-controller-sa" not found |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller |
azure-disk-csi-driver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/azure-disk-privileged-role because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-authentication because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
RevisionCreate |
Revision 1 created because configmap "audit-0" not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-authentication namespace | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller |
azure-file-csi-driver-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/azure-file-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/azure-disk-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-kube-rbac-proxy-binding because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftAuthenticatorCertRequester is available | |
| (x9) | openshift-cluster-csi-drivers |
replicaset-controller |
azure-disk-csi-driver-controller-75b96cdcf6 |
FailedCreate |
Error creating: pods "azure-disk-csi-driver-controller-75b96cdcf6-" is forbidden: error looking up service account openshift-cluster-csi-drivers/azure-disk-csi-driver-controller-sa: serviceaccount "azure-disk-csi-driver-controller-sa" not found |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources |
authentication-operator |
ServiceCreated |
Created Service/api -n openshift-oauth-apiserver because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskProgressing: Waiting for Deployment to deploy pods\nAzureFileProgressing: Waiting for Deployment to act on changes" to "AzureDiskProgressing: Waiting for Deployment to deploy pods" | |
openshift-service-ca |
service-ca-controller |
service-ca-controller-lock |
LeaderElection |
service-ca-6ff4f55f67-szqhs_44b11d4f-79bb-4c76-90dc-6ed199b63e48 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator: configmaps "loadbalancer-serving-ca" already exists | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/v1 because it was missing | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/azure-file-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
ServiceCreated |
Created Service/azure-file-csi-driver-controller-metrics -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/v1 because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
ConfigMapCreated |
Created ConfigMap/azure-disk-csi-driver-trusted-ca-bundle -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller |
azure-file-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-csi-main-attacher-binding because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-csi-drivers |
replicaset-controller |
azure-file-csi-driver-controller-5fdb6df78c |
SuccessfulCreate |
Created pod: azure-file-csi-driver-controller-5fdb6df78c-dspvm | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing | |
openshift-cluster-csi-drivers |
default-scheduler |
azure-file-csi-driver-controller-5fdb6df78c-dspvm |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-controller-5fdb6df78c-dspvm to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
| (x2) | openshift-cluster-csi-drivers |
controllermanager |
azure-file-csi-driver-controller-pdb |
NoPods |
No matching pods found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-cluster-csi-drivers |
replicaset-controller |
azure-file-csi-driver-controller-57bfbc5977 |
SuccessfulCreate |
Created pod: azure-file-csi-driver-controller-57bfbc5977-cm657 | |
openshift-cluster-csi-drivers |
default-scheduler |
azure-file-csi-driver-controller-57bfbc5977-cm657 |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-controller-57bfbc5977-cm657 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
| (x2) | openshift-cluster-csi-drivers |
controllermanager |
azure-disk-csi-driver-controller-pdb |
NoPods |
No matching pods found |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller |
azure-disk-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-main-resizer-binding because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-kube-rbac-proxy-binding because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/azure-disk-csi-driver-controller-pdb -n openshift-cluster-csi-drivers because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
ServiceAccountCreated |
Created ServiceAccount/azure-disk-csi-driver-controller-sa -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
ConfigMapCreated |
Created ConfigMap/azure-file-csi-driver-trusted-ca-bundle -n openshift-cluster-csi-drivers because it was missing | |
| (x5) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-h8m2s |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/azure-file-csi-driver-controller-pdb -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivercontrolplanestaticresourcescontroller-azurediskdrivercontrolplanestaticresourcescontroller |
azure-disk-csi-driver-operator |
ServiceCreated |
Created Service/azure-disk-csi-driver-controller-metrics -n openshift-cluster-csi-drivers because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
| (x3) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator: configmaps "kube-control-plane-signer-ca" already exists |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrolplanestaticresourcescontroller-azurefiledrivercontrolplanestaticresourcescontroller |
azure-file-csi-driver-operator |
ServiceAccountCreated |
Created ServiceAccount/azure-file-csi-driver-controller-sa -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
deployment-controller |
azure-file-csi-driver-controller |
ScalingReplicaSet |
Scaled up replica set azure-file-csi-driver-controller-5fdb6df78c to 1 from 0 | |
openshift-cluster-csi-drivers |
deployment-controller |
azure-file-csi-driver-controller |
ScalingReplicaSet |
Scaled down replica set azure-file-csi-driver-controller-57bfbc5977 to 1 from 2 | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller |
azure-file-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-csi-main-resizer-binding because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceCreated |
Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-webhook-5b799d8d59-l5phw |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-5b799d8d59-l5phw to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-webhook-5b799d8d59-l5phw |
AddedInterface |
Add eth0 [10.129.0.41/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-5b799d8d59-l5phw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f1332e8473ee587a634b23e555a97a472180466f100f68512d95bde33ad3a22" | |
openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-csi-driver-node-service_azurefiledrivernodeservicecontroller-azurefiledrivernodeservicecontroller |
azure-file-csi-driver-operator |
DaemonSetUpdated |
Updated DaemonSet.apps/azure-file-csi-driver-node -n openshift-cluster-csi-drivers because it changed | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-webhook-5b799d8d59 |
SuccessfulCreate |
Created pod: csi-snapshot-webhook-5b799d8d59-l5phw | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-cluster-csi-drivers |
replicaset-controller |
azure-disk-csi-driver-controller-75b96cdcf6 |
SuccessfulCreate |
Created pod: azure-disk-csi-driver-controller-75b96cdcf6-q94cl | |
openshift-cluster-csi-drivers |
replicaset-controller |
azure-disk-csi-driver-controller-75b96cdcf6 |
SuccessfulCreate |
Created pod: azure-disk-csi-driver-controller-75b96cdcf6-vc6f8 | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources |
authentication-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-cluster-csi-drivers |
default-scheduler |
azure-disk-csi-driver-controller-75b96cdcf6-vc6f8 |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-controller-75b96cdcf6-vc6f8 to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-cluster-csi-drivers |
default-scheduler |
azure-disk-csi-driver-controller-75b96cdcf6-q94cl |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-controller-75b96cdcf6-q94cl to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources |
authentication-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/oauth-apiserver-pdb -n openshift-oauth-apiserver because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller |
azure-disk-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-volumesnapshot-reader-provisioner-binding because it was missing | |
openshift-cluster-csi-drivers |
azure-disk-csi-driver-operator-csi-driver-node-service_azurediskdrivernodeservicecontroller-azurediskdrivernodeservicecontroller |
azure-disk-csi-driver-operator |
DaemonSetUpdated |
Updated DaemonSet.apps/azure-disk-csi-driver-node -n openshift-cluster-csi-drivers because it changed | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-6ff94d4dc8 | SuccessfulDelete | Deleted pod: csi-snapshot-webhook-6ff94d4dc8-5vlq8 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-csi-main-provisioner-binding because it was missing |
| (x12) | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | FailedCreate | Error creating: pods "azure-disk-csi-driver-node-" is forbidden: error looking up service account openshift-cluster-csi-drivers/azure-disk-csi-driver-node-sa: serviceaccount "azure-disk-csi-driver-node-sa" not found |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled down replica set csi-snapshot-webhook-6ff94d4dc8 to 1 from 2 |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled up replica set csi-snapshot-webhook-5b799d8d59 to 1 from 0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotwebhookcontroller-deployment-controller--csisnapshotwebhookcontroller | csi-snapshot-controller-operator | DeploymentUpdated | Updated Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-main-attacher-binding because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 3 nodes are at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/azure-disk-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-main-snapshotter-binding because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/azure-file-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-csi-driver-binding because it was missing |
| (x4) | openshift-controller-manager | kubelet | controller-manager-58c5c594b9-s5vgm | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-storageclass-reader-resizer-binding because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-node-privileged-binding because it was missing |
| (x19) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-disk-csi-main-provisioner-binding because it was missing |
| (x7) | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x7) | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | CSIDriverCreated | Created CSIDriver.storage.k8s.io/disk.csi.azure.com because it was missing |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivergueststaticresourcescontroller-azurediskdrivergueststaticresourcescontroller | azure-disk-csi-driver-operator | ServiceAccountCreated | Created ServiceAccount/azure-disk-csi-driver-node-sa -n openshift-cluster-csi-drivers because it was missing |
| (x7) | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| (x7) | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
| (x7) | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-disk-csi-driver-controller-75b96cdcf6 | SuccessfulDelete | Deleted pod: azure-disk-csi-driver-controller-75b96cdcf6-vc6f8 |
| | openshift-cluster-csi-drivers | default-scheduler | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| (x7) | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "mco-proxy-tls" not found |
| (x7) | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x7) | openshift-monitoring | kubelet | cluster-monitoring-operator-799db46f99-r6f42 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x7) | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| (x13) | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | FailedCreate | Error creating: pods "azure-file-csi-driver-node-" is forbidden: error looking up service account openshift-cluster-csi-drivers/azure-file-csi-driver-node-sa: serviceaccount "azure-file-csi-driver-node-sa" not found |
| | openshift-cluster-csi-drivers | default-scheduler | azure-disk-csi-driver-node-54zjc | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-node-54zjc to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| (x7) | openshift-image-registry | kubelet | cluster-image-registry-operator-86c67755bb-2b7lz | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | default-scheduler | azure-disk-csi-driver-node-fzdwd | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-node-fzdwd to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| (x7) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7c88c666f8-r2wz4 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x7) | openshift-operator-lifecycle-manager | kubelet | olm-operator-9958db496-pgws2 | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| (x7) | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-disk-csi-driver-controller-79dc6dfd8f | SuccessfulCreate | Created pod: azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-596f48f6bd-s4v8t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-596f48f6bd-s4v8t | AddedInterface | Add eth0 [10.129.0.34/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | deployment-controller | azure-disk-csi-driver-controller | ScalingReplicaSet | Scaled down replica set azure-disk-csi-driver-controller-75b96cdcf6 to 1 from 2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ece0819f12f73bd0f0a7c2b2d8034aeb5a68929dec9044efe8d6971a779f3ffd" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Started | Started container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Created | Created container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | SuccessfulCreate | Created pod: azure-disk-csi-driver-node-54zjc |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | SuccessfulCreate | Created pod: azure-disk-csi-driver-node-s64hj |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 1 created because configmap "kube-scheduler-pod-0" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-csi-storageclass-reader-resizer-binding because it was missing |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | SuccessfulCreate | Created pod: azure-disk-csi-driver-node-fzdwd |
| | openshift-cluster-csi-drivers | default-scheduler | azure-disk-csi-driver-node-s64hj | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-node-s64hj to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-cluster-csi-drivers | deployment-controller | azure-disk-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-disk-csi-driver-controller-79dc6dfd8f to 1 from 0 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-s64hj | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ServiceAccountCreated | Created ServiceAccount/azure-file-csi-driver-node-sa -n openshift-cluster-csi-drivers because it was missing |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-9d764bfb9-w5dr5 | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x5) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-nodecontroller | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-0 |
| (x5) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-nodecontroller | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-7b984c96f7-zjwpp | AddedInterface | Add eth0 [10.129.0.32/23] from ovn-kubernetes |
| (x5) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-nodecontroller | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-s64hj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-s64hj | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-s64hj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/azure-file-node-privileged-binding because it was missing |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator-azurefiledrivergueststaticresourcescontroller-azurefiledrivergueststaticresourcescontroller | azure-file-csi-driver-operator | CSIDriverCreated | Created CSIDriver.storage.k8s.io/file.csi.azure.com because it was missing |
| (x4) | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-75b96cdcf6-vc6f8 | FailedMount | MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "azure-disk-csi-driver-controller-metrics-serving-cert" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: missing notAfter |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| (x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Created | Created container azure-inject-credentials |
| | openshift-dns-operator | kubelet | dns-operator-6897b57cbf-6t6wl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c73bebec6244b5a77060113a06c93d1adbe1fdfe239aaf4e920ae895133eb6a" |
| | openshift-dns-operator | multus | dns-operator-6897b57cbf-6t6wl | AddedInterface | Add eth0 [10.129.0.36/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-5b799d8d59-l5phw | Started | Started container webhook |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-5b799d8d59-l5phw | Created | Created container webhook |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-fzdwd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-fzdwd | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-fzdwd | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-fzdwd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-5b799d8d59-l5phw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f1332e8473ee587a634b23e555a97a472180466f100f68512d95bde33ad3a22" in 2.669s (2.669s including waiting) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Started | Started container azure-inject-credentials |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "AzureDiskCSIDriverOperatorDeploymentDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled up replica set csi-snapshot-webhook-5b799d8d59 to 2 from 1 |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a30ea962d46fa29a514e560b9bf52820c3eb906e23fa6bc5c199252a293b82d1" |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled down replica set csi-snapshot-webhook-6ff94d4dc8 to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | |
Status for clusteroperator/csi-snapshot-controller changed: Available changed from False to True ("All is well") | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to update pods" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to update pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-webhook-6ff94d4dc8 |
SuccessfulDelete |
Deleted pod: csi-snapshot-webhook-6ff94d4dc8-dzpl8 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-webhook-5b799d8d59-kj7nt |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-5b799d8d59-kj7nt to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-webhook-5b799d8d59 |
SuccessfulCreate |
Created pod: csi-snapshot-webhook-5b799d8d59-kj7nt | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-7b984c96f7-zjwpp |
Created |
Created container kube-rbac-proxy | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-54zjc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing changed from True to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-5b799d8d59-kj7nt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f1332e8473ee587a634b23e555a97a472180466f100f68512d95bde33ad3a22" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "AzureDiskCSIDriverOperatorDeploymentDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8477dc5fd6-82ddm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ece0819f12f73bd0f0a7c2b2d8034aeb5a68929dec9044efe8d6971a779f3ffd" in 2.958s (2.958s including waiting) | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-webhook-5b799d8d59-kj7nt |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing | |
openshift-cluster-machine-approver |
ci-op-9xx71rvq-1e28e-w667k-master-1_21403368-bb3e-46de-b1ff-4a89391b3965 |
cluster-machine-approver-leader |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-master-1_21403368-bb3e-46de-b1ff-4a89391b3965 became leader | |
openshift-dns-operator |
kubelet |
dns-operator-6897b57cbf-6t6wl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-dns-operator |
kubelet |
dns-operator-6897b57cbf-6t6wl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c73bebec6244b5a77060113a06c93d1adbe1fdfe239aaf4e920ae895133eb6a" in 3.114s (3.114s including waiting) | |
openshift-dns-operator |
kubelet |
dns-operator-6897b57cbf-6t6wl |
Created |
Created container dns-operator | |
openshift-dns-operator |
kubelet |
dns-operator-6897b57cbf-6t6wl |
Started |
Started container dns-operator | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "AzureDiskCSIDriverOperatorCRDegraded: All is well",Progressing changed from False to True ("AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to act on changes"),Available changed from True to False ("AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-dns-operator |
cluster-dns-operator |
dns-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 2 triggered by "optional secret/serving-cert has been created" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-kube-apiserver because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" in 3.155s (3.155s including waiting) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing | |
openshift-dns |
daemonset-controller |
dns-default |
SuccessfulCreate |
Created pod: dns-default-kmxpr | |
openshift-dns |
daemonset-controller |
node-resolver |
SuccessfulCreate |
Created pod: node-resolver-p2bm7 | |
openshift-dns |
daemonset-controller |
node-resolver |
SuccessfulCreate |
Created pod: node-resolver-kl72g | |
openshift-dns |
daemonset-controller |
node-resolver |
SuccessfulCreate |
Created pod: node-resolver-vprmw | |
openshift-dns |
kubelet |
node-resolver-vprmw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-dns |
default-scheduler |
dns-default-tfrnn |
Scheduled |
Successfully assigned openshift-dns/dns-default-tfrnn to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Started |
Started container csi-driver | |
openshift-dns |
default-scheduler |
node-resolver-kl72g |
Scheduled |
Successfully assigned openshift-dns/node-resolver-kl72g to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Created |
Created container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" in 3.314s (3.314s including waiting) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing | |
openshift-dns |
default-scheduler |
node-resolver-vprmw |
Scheduled |
Successfully assigned openshift-dns/node-resolver-vprmw to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-dns |
daemonset-controller |
dns-default |
SuccessfulCreate |
Created pod: dns-default-tfrnn | |
openshift-dns |
daemonset-controller |
dns-default |
SuccessfulCreate |
Created pod: dns-default-9h9cc | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" | |
openshift-dns |
default-scheduler |
dns-default-9h9cc |
Scheduled |
Successfully assigned openshift-dns/dns-default-9h9cc to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-dns |
default-scheduler |
node-resolver-p2bm7 |
Scheduled |
Successfully assigned openshift-dns/node-resolver-p2bm7 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" | |
| (x6) | openshift-controller-manager |
kubelet |
controller-manager-7777c4d45c-h8m2s |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-dns-operator |
kubelet |
dns-operator-6897b57cbf-6t6wl |
Created |
Created container kube-rbac-proxy | |
openshift-dns-operator |
kubelet |
dns-operator-6897b57cbf-6t6wl |
Started |
Started container kube-rbac-proxy | |
openshift-dns |
default-scheduler |
dns-default-kmxpr |
Scheduled |
Successfully assigned openshift-dns/dns-default-kmxpr to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Started |
Started container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Created |
Created container csi-driver | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-dns namespace | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-78b66d7c68-fjzpk |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to act on changes" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" | |
openshift-dns |
kubelet |
node-resolver-vprmw |
Created |
Created container dns-node-resolver | |
openshift-dns |
kubelet |
node-resolver-p2bm7 |
Started |
Started container dns-node-resolver | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "AzureDiskCSIDriverOperatorCRDegraded: All is well" to "AzureDiskCSIDriverOperatorCRDegraded: All is well\nAzureFileCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("AuthenticatorCertKeyProgressing: All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-5b799d8d59-kj7nt |
Started |
Started container webhook | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-5b799d8d59-kj7nt |
Created |
Created container webhook | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-5b799d8d59-kj7nt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f1332e8473ee587a634b23e555a97a472180466f100f68512d95bde33ad3a22" in 3.212s (3.212s including waiting) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-dns |
kubelet |
node-resolver-vprmw |
Started |
Started container dns-node-resolver | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing | |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-78b66d7c68-g6tds |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-env-var-controller |
etcd-operator |
EnvVarControllerUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing | |
openshift-dns |
kubelet |
node-resolver-p2bm7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine | |
openshift-dns |
kubelet |
node-resolver-p2bm7 |
Created |
Created container dns-node-resolver | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing | |
| (x7) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/v1 because it was missing |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Created |
Created container csi-node-driver-registrar | |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-78b66d7c68-kqbr5 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Started |
Started container csi-node-driver-registrar | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: client-ca, secrets: csr-signer, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigWriteError |
Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.048s (2.048s including waiting) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.374s (2.374s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "AzureDiskCSIDriverOperatorCRDegraded: All is well\nAzureFileCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "AzureDiskCSIDriverOperatorCRDegraded: All is well\nAzureFileCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAzureFileCSIDriverOperatorCRDegraded: All is well",Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods",Available message changed from "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" to "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverNodeServiceControllerAvailable: Waiting for the 
DaemonSet to deploy the CSI Node Service\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "AzureDiskCSIDriverOperatorCRDegraded: All is well\nAzureFileCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAzureFileCSIDriverOperatorCRDegraded: All is well" to "AzureDiskCSIDriverOperatorCRDegraded: All is well\nAzureFileCSIDriverOperatorCRDegraded: All is well" | |
| (x5) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "azure-disk-csi-driver-controller-metrics-serving-cert" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Created |
Created container csi-liveness-probe | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6d46446fb6 |
SuccessfulCreate |
Created pod: controller-manager-6d46446fb6-s4zxm | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7777c4d45c to 0 from 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-6d46446fb6-s4zxm |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7777c4d45c |
SuccessfulDelete |
Deleted pod: controller-manager-7777c4d45c-h8m2s | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
| (x3) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Started |
Started container csi-liveness-probe | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Available message changed from "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" to "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-s64hj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 1.929s (1.929s including waiting) | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-6d46446fb6 to 1 from 0 | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.124s (2.124s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Created |
Created container csi-liveness-probe | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-6d46446fb6-s4zxm |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-6d46446fb6-s4zxm to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-fzdwd |
Started |
Started container csi-liveness-probe | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(3)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| (x3) | openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
RequiredInstallerResourcesMissing |
configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" | |
| (x9) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
RequiredInstallerResourcesMissing |
secrets: kube-scheduler-client-cert-key |
| (x5) | openshift-dns |
kubelet |
dns-default-tfrnn |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| (x5) | openshift-dns |
kubelet |
dns-default-kmxpr |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| (x5) | openshift-dns |
kubelet |
dns-default-9h9cc |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| (x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionCreate |
Revision 2 created because optional secret/serving-cert has been created | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 2 triggered by "optional secret/serving-cert has been created" | |
| (x6) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-controller-5fdb6df78c-dspvm |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "azure-file-csi-driver-controller-metrics-serving-cert" not found |
| (x6) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-controller-57bfbc5977-cm657 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "azure-file-csi-driver-controller-metrics-serving-cert" not found |
| (x6) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-75b96cdcf6-q94cl |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "azure-disk-csi-driver-controller-metrics-serving-cert" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-authentication because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
| (x3) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: client-ca, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-cluster-csi-drivers |
default-scheduler |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-controller-6d9996db94-26g2j to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-cluster-csi-drivers |
deployment-controller |
azure-disk-csi-driver-controller |
ScalingReplicaSet |
Scaled down replica set azure-disk-csi-driver-controller-75b96cdcf6 to 0 from 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Started |
Started container azure-inject-credentials | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" | |
| (x2) | openshift-cluster-csi-drivers |
azure-file-csi-driver-operator-azurefiledrivercontrollerservicecontroller-deployment-controller--azurefiledrivercontrollerservicecontroller |
azure-file-csi-driver-operator |
DeploymentUpdated |
Updated Deployment.apps/azure-file-csi-driver-controller -n openshift-cluster-csi-drivers because it changed |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Created |
Created container azure-inject-credentials | |
openshift-cluster-csi-drivers |
default-scheduler |
azure-file-csi-driver-controller-7bf87ccd87-qcs5n |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-controller-7bf87ccd87-qcs5n to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-cluster-csi-drivers |
replicaset-controller |
azure-disk-csi-driver-controller-75b96cdcf6 |
SuccessfulDelete |
Deleted pod: azure-disk-csi-driver-controller-75b96cdcf6-q94cl | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" already present on machine | |
openshift-cluster-csi-drivers |
replicaset-controller |
azure-file-csi-driver-controller-7bf87ccd87 |
SuccessfulCreate |
Created pod: azure-file-csi-driver-controller-7bf87ccd87-qcs5n | |
openshift-cluster-csi-drivers |
deployment-controller |
| | | | azure-file-csi-driver-controller | ScalingReplicaSet | Scaled down replica set azure-file-csi-driver-controller-57bfbc5977 to 0 from 1 |
| | openshift-cluster-csi-drivers | deployment-controller | azure-file-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-file-csi-driver-controller-7bf87ccd87 to 1 from 0 |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-disk-csi-driver-controller-6d9996db94 | SuccessfulCreate | Created pod: azure-disk-csi-driver-controller-6d9996db94-26g2j |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-file-csi-driver-controller-57bfbc5977 | SuccessfulDelete | Deleted pod: azure-file-csi-driver-controller-57bfbc5977-cm657 |
| | openshift-cluster-csi-drivers | multus | azure-disk-csi-driver-controller-6d9996db94-26g2j | AddedInterface | Add eth0 [10.130.0.17/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| (x3) | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator-azurediskdrivercontrollerservicecontroller-deployment-controller--azurediskdrivercontrollerservicecontroller | azure-disk-csi-driver-operator | DeploymentUpdated | Updated Deployment.apps/azure-disk-csi-driver-controller -n openshift-cluster-csi-drivers because it changed |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | deployment-controller | azure-disk-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-disk-csi-driver-controller-6d9996db94 to 1 from 0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing |
| (x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Started | Started container kube-rbac-proxy-8201 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: missing notAfter |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Created | Created container kube-rbac-proxy-8201 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-dns | kubelet | node-resolver-kl72g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a30ea962d46fa29a514e560b9bf52820c3eb906e23fa6bc5c199252a293b82d1" in 16.7s (16.7s including waiting) |
| | openshift-cluster-csi-drivers | multus | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | AddedInterface | Add eth0 [10.129.0.45/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" in 2.53s (2.53s including waiting) |
| | openshift-cluster-csi-drivers | disk.csi.azure.com/1718102930831-6792-disk.csi.azure.com | disk-csi-azure-com | LeaderElection | 1718102930831-6792-disk-csi-azure-com became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-596f48f6bd-s4v8t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" in 17.937s (17.937s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" in 16.241s (16.241s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Created | Created container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Started | Started container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-t9gr2 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-t9gr2 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lbfm2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | Created | Created container cloud-credential-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Started | Started container csi-driver |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-t9gr2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-7b984c96f7-zjwpp | Started | Started container cloud-credential-operator |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" already present on machine |
| | openshift-dns | kubelet | node-resolver-kl72g | Created | Created container dns-node-resolver |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-5m72k |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-lbfm2 |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Created | Created container azure-inject-credentials |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-csi-drivers | multus | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | AddedInterface | Add eth0 [10.129.0.42/23] from ovn-kubernetes |
| | openshift-dns | kubelet | node-resolver-kl72g | Started | Started container dns-node-resolver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Started | Started container csi-driver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-lbfm2 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-lbfm2 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-5m72k | Started | Started container tuned |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Created | Created container csi-driver |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-t9gr2 |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-596f48f6bd-s4v8t | Created | Created container cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-596f48f6bd-s4v8t | Started | Started container cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-5m72k | Created | Created container tuned |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-596f48f6bd-s4v8t_0b1e4920-40c0-461b-a738-3f6368147832 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-596f48f6bd-s4v8t_0b1e4920-40c0-461b-a738-3f6368147832 became leader |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-5m72k | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-5m72k to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-5m72k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Created | Created container kube-rbac-proxy-8201 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Started | Started container kube-rbac-proxy-8201 |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | SuccessfulCreate | Created pod: azure-file-csi-driver-node-tt4sz |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" |
| | openshift-cluster-csi-drivers | default-scheduler | azure-file-csi-driver-node-tt4sz | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-node-tt4sz to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | external-attacher-leader-disk.csi.azure.com/azure-disk-csi-driver-controller-6d9996db94-26g2j | external-attacher-leader-disk-csi-azure-com | LeaderElection | azure-disk-csi-driver-controller-6d9996db94-26g2j became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Started | Started container azure-inject-credentials |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | SuccessfulCreate | Created pod: azure-file-csi-driver-node-67gtn |
| | openshift-dns | kubelet | dns-default-kmxpr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | SuccessfulCreate | Created pod: azure-file-csi-driver-node-b6dqs |
| | openshift-dns | multus | dns-default-kmxpr | AddedInterface | Add eth0 [10.129.0.43/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 1.896s (1.896s including waiting) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" in 2.422s (2.422s including waiting) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 3 because node ci-op-9xx71rvq-1e28e-w667k-master-0 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 3 created because required configmap/kube-scheduler-pod has changed |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Created |
Created container attacher-kube-rbac-proxy | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-b6dqs |
Started |
Started container azure-inject-credentials | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-b6dqs |
Created |
Created container azure-inject-credentials | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-54zjc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-dns |
kubelet |
dns-default-tfrnn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-b6dqs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine | |
openshift-cluster-csi-drivers |
default-scheduler |
azure-file-csi-driver-node-b6dqs |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-node-b6dqs to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-dns |
multus |
dns-default-9h9cc |
AddedInterface |
Add eth0 [10.130.0.16/23] from ovn-kubernetes | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Created |
Created container csi-attacher | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Started |
Started container csi-attacher | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-controller-6d9996db94-26g2j |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready" | |
| | openshift-dns | kubelet | dns-default-9h9cc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" |
| | openshift-dns | multus | dns-default-tfrnn | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Started | Started container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | default-scheduler | azure-file-csi-driver-node-67gtn | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-node-67gtn to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" in 2.893s (2.893s including waiting) |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-1 -n openshift-etcd because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-1 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-1 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreateFailed | Failed to create revision 1: configmap "etcd-pod" not found |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Created | Created container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| (x7) | openshift-route-controller-manager | kubelet | route-controller-manager-78b66d7c68-kqbr5 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Started | Started container csi-provisioner |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 4.496s (4.496s including waiting) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 4.207s (4.207s including waiting) |
| | openshift-dns | kubelet | dns-default-kmxpr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" in 4.259s (4.259s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 6.216s (6.216s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" in 3.415s (3.415s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Created | Created container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Created | Created container csi-liveness-probe |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca",Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" |
| | openshift-dns | kubelet | dns-default-kmxpr | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-kmxpr | Created | Created container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 1 created because configmap "kube-controller-manager-pod-0" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Created | Created container csi-attacher |
| | openshift-dns | kubelet | dns-default-kmxpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-dns | kubelet | dns-default-kmxpr | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-kmxpr | Created | Created container dns |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-54zjc | Started | Started container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Started | Started container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Started | Started container kube-rbac-proxy-8211 |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container kube-rbac-proxy-8211 |
| | openshift-cluster-csi-drivers | file.csi.azure.com/1718102938946-3774-file.csi.azure.com | file-csi-azure-com | LeaderElection | 1718102938946-3774-file-csi-azure-com became leader |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-tt4sz | Started | Started container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 5.525s (5.525s including waiting) |
| | openshift-cluster-csi-drivers | external-resizer-disk-csi-azure-com/azure-disk-csi-driver-controller-6d9996db94-26g2j | external-resizer-disk-csi-azure-com | LeaderElection | azure-disk-csi-driver-controller-6d9996db94-26g2j became leader |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Created | Created container csi-liveness-probe |
| | openshift-dns | kubelet | dns-default-9h9cc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" in 5.809s (5.809s including waiting) |
| | openshift-cluster-csi-drivers | external-attacher-leader-file.csi.azure.com/azure-file-csi-driver-controller-7bf87ccd87-qcs5n | external-attacher-leader-file-csi-azure-com | LeaderElection | azure-file-csi-driver-controller-7bf87ccd87-qcs5n became leader |
| | openshift-dns | kubelet | dns-default-9h9cc | Started | Started container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Created | Created container csi-driver |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-t9gr2 | Started | Started container tuned |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Available message changed from "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" to "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverControllerServiceControllerAvailable: Waiting for Deployment" |
| | openshift-dns | kubelet | dns-default-9h9cc | Created | Created container dns |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" in 5.564s (5.564s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 3 nodes are at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0") |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-controller-7bf87ccd87-qcs5n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-controller-7bf87ccd87-qcs5n |
Created |
Created container attacher-kube-rbac-proxy | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-controller-7bf87ccd87-qcs5n |
Started |
Started container attacher-kube-rbac-proxy | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-controller-7bf87ccd87-qcs5n |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-t9gr2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" in 7.707s (7.707s including waiting) | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-t9gr2 |
Created |
Created container tuned | |
openshift-dns |
kubelet |
dns-default-9h9cc |
Started |
Started container dns | |
openshift-dns |
kubelet |
dns-default-9h9cc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-dns |
kubelet |
dns-default-9h9cc |
Created |
Created container kube-rbac-proxy | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing | |
openshift-kube-scheduler |
multus |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
| | openshift-kube-scheduler | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lbfm2 | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lbfm2 | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lbfm2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" in 9.072s (9.072s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-67gtn | Started | Started container csi-liveness-probe |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6c7c85d5db to 1 from 0 |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" in 2.493s (2.493s including waiting) |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-78b66d7c68 to 2 from 3 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-78b66d7c68 | SuccessfulDelete | Deleted pod: route-controller-manager-78b66d7c68-kqbr5 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6c7c85d5db | SuccessfulCreate | Created pod: route-controller-manager-6c7c85d5db-pk6hs |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6c7c85d5db-pk6hs | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-dns | kubelet | dns-default-tfrnn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" in 8.857s (8.857s including waiting) |
| | openshift-dns | kubelet | dns-default-tfrnn | Created | Created container dns |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Available message changed from "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment\nAzureFileCSIDriverOperatorCRAvailable: AzureFileDriverControllerServiceControllerAvailable: Waiting for Deployment" to "AzureDiskCSIDriverOperatorCRAvailable: AzureDiskDriverControllerServiceControllerAvailable: Waiting for Deployment" |
| | openshift-dns | kubelet | dns-default-tfrnn | Started | Started container dns |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | kubelet | dns-default-tfrnn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 8.648s (8.648s including waiting) |
| | openshift-cluster-csi-drivers | deployment-controller | azure-file-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-file-csi-driver-controller-7bf87ccd87 to 2 from 1 |
| | openshift-cluster-csi-drivers | deployment-controller | azure-file-csi-driver-controller | ScalingReplicaSet | Scaled down replica set azure-file-csi-driver-controller-5fdb6df78c to 0 from 1 |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-file-csi-driver-controller-7bf87ccd87 | SuccessfulCreate | Created pod: azure-file-csi-driver-controller-7bf87ccd87-xb66l |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | external-resizer-file-csi-azure-com/azure-file-csi-driver-controller-7bf87ccd87-qcs5n | external-resizer-file-csi-azure-com | LeaderElection | azure-file-csi-driver-controller-7bf87ccd87-qcs5n became leader |
| | openshift-cluster-csi-drivers | default-scheduler | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-file-csi-driver-controller-7bf87ccd87-xb66l to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-file-csi-driver-controller-5fdb6df78c | SuccessfulDelete | Deleted pod: azure-file-csi-driver-controller-5fdb6df78c-dspvm |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Created | Created container csi-resizer |
| | openshift-cluster-csi-drivers | external-snapshotter-leader-disk.csi.azure.com/azure-disk-csi-driver-controller-6d9996db94-26g2j | external-snapshotter-leader-disk-csi-azure-com | LeaderElection | azure-disk-csi-driver-controller-6d9996db94-26g2j became leader |
| | openshift-cluster-csi-drivers | multus | azure-file-csi-driver-controller-5fdb6df78c-dspvm | AddedInterface | Add eth0 [10.130.0.14/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Created | Created container csi-provisioner |
| | openshift-multus | multus | network-metrics-daemon-jttv4 | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-tqqbv | AddedInterface | Add eth0 [10.129.0.6/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Created | Created container kube-rbac-proxy-8211 |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-disk-csi-driver-controller-6d9996db94 | SuccessfulCreate | Created pod: azure-disk-csi-driver-controller-6d9996db94-b8cs5 |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | multus | azure-file-csi-driver-controller-7bf87ccd87-xb66l | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Created | Created container csi-driver |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6c7c85d5db-pk6hs | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6c7c85d5db-pk6hs to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Available changed from False to True ("DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator\nAzureDiskCSIDriverOperatorCRAvailable: All is well\nAzureFileCSIDriverOperatorCRAvailable: All is well") |
| | openshift-dns | kubelet | dns-default-tfrnn | Created | Created container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Created | Created container csi-attacher |
| | openshift-dns | kubelet | dns-default-tfrnn | Started | Started container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | default-scheduler | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Scheduled | Successfully assigned openshift-cluster-csi-drivers/azure-disk-csi-driver-controller-6d9996db94-b8cs5 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-cluster-csi-drivers | multus | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Created | Created container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-b6dqs | Started | Started container csi-liveness-probe |
| | openshift-cluster-csi-drivers | deployment-controller | azure-disk-csi-driver-controller | ScalingReplicaSet | Scaled up replica set azure-disk-csi-driver-controller-6d9996db94 to 2 from 1 |
| | openshift-cluster-csi-drivers | deployment-controller | azure-disk-csi-driver-controller | ScalingReplicaSet | Scaled down replica set azure-disk-csi-driver-controller-79dc6dfd8f to 0 from 1 |
| | openshift-cluster-csi-drivers | replicaset-controller | azure-disk-csi-driver-controller-79dc6dfd8f | SuccessfulDelete | Deleted pod: azure-disk-csi-driver-controller-79dc6dfd8f-tl6hr |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Started | Started container kube-rbac-proxy-8211 |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" already present on machine |
| | openshift-multus | multus | network-metrics-daemon-bh74v | AddedInterface | Add eth0 [10.130.0.3/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-5fdb6df78c-dspvm | Started | Started container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container kube-rbac-proxy-8211 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager: cause by changes in data.pod.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container kube-rbac-proxy-8211 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container azure-inject-credentials |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" already present on machine |
| | openshift-cluster-version | kubelet | cluster-version-operator-6fff9b89f6-zgszm | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" |
| | openshift-marketplace | multus | marketplace-operator-867c6b6ccc-rmltl | AddedInterface | Add eth0 [10.129.0.21/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6180e87936a3baf9a45604c8ebbb4e28f6e46725d5de227051ff63d3fa3d8d40" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed57c14910f2f4daaa3c9e0c04364d2989f9748cb634c53ee5903d54a0d5e737" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container csi-driver |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container kube-rbac-proxy-8201 |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container kube-rbac-proxy-8201 |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Created | Created container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-api | multus | cluster-autoscaler-operator-fffbcbd5b-hpsfj | AddedInterface | Add eth0 [10.129.0.28/23] from ovn-kubernetes |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 |
| | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12b3eec8af6f44826bb42555d0363aa80e03b444efc93f28b44aee68bf6fb109" |
| | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-operator-lifecycle-manager | multus | olm-operator-9958db496-pgws2 | AddedInterface | Add eth0 [10.129.0.12/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-9958db496-pgws2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | Created | Created container kube-rbac-proxy |
| | openshift-machine-api | multus | machine-api-operator-6f847dd5f5-wqkzk | AddedInterface | Add eth0 [10.129.0.10/23] from ovn-kubernetes |
| | openshift-machine-api | multus | cluster-baremetal-operator-6475c74794-8hd5r | AddedInterface | Add eth0 [10.129.0.8/23] from ovn-kubernetes |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" |
| | openshift-multus | multus | multus-admission-controller-6fc7977fb-zpcvg | AddedInterface | Add eth0 [10.129.0.27/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | multus | machine-config-operator-6d64fdfbc-xtlls | AddedInterface | Add eth0 [10.129.0.35/23] from ovn-kubernetes |
| | openshift-ingress-operator | multus | ingress-operator-66bb9945d4-25hsj | AddedInterface | Add eth0 [10.129.0.9/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" |
| | openshift-kube-scheduler | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" in 4.58s (4.58s including waiting) |
| | openshift-kube-scheduler | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-799db46f99-r6f42 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d07a4a7e6f001e827895fd370373d91a4912e737b9c7bd56ad9e3aa2bdcd6349" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" |
| | openshift-monitoring | multus | cluster-monitoring-operator-799db46f99-r6f42 | AddedInterface | Add eth0 [10.129.0.37/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-7c88c666f8-r2wz4 | AddedInterface | Add eth0 [10.129.0.11/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7c88c666f8-r2wz4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" |
| | openshift-multus | multus | multus-admission-controller-6fc7977fb-4v6xp | AddedInterface | Add eth0 [10.129.0.38/23] from ovn-kubernetes |
| | openshift-image-registry | multus | cluster-image-registry-operator-86c67755bb-2b7lz | AddedInterface | Add eth0 [10.129.0.13/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" already present on machine |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-86c67755bb-2b7lz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de17441284be3fbe91e2df7e2d46a547a658a327201f9b51b58c70fe54f8378e" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-api | multus | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | AddedInterface | Add eth0 [10.129.0.17/23] from ovn-kubernetes |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-7f9c9cfdd9-6d8wg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc8c90f8ea1a38fe46d09b6351fa396eb6a398d3a72766c911662ae2644abcab" | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-7c88c666f8-r2wz4 |
Created |
Created container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-6f847dd5f5-wqkzk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c17343cfe2ce58f3278203ef9398d3472a313ca67702d107b482007f812bc4a7" | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-7c88c666f8-r2wz4 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/master-user-data-managed -n openshift-machine-api because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-7c88c666f8-r2wz4 |
Started |
Started container kube-rbac-proxy | |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | Created | Created container kube-rbac-proxy |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config-operator started a version change from [] to [{operator 4.16.0-0.nightly-2024-06-10-211334}] |
| | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-9d764bfb9-w5dr5 | AddedInterface | Add eth0 [10.129.0.30/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-9d764bfb9-w5dr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-6fff9b89f6-zgszm | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" in 3.037s (3.037s including waiting) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-7kvwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-7kvwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | default-scheduler | machine-config-daemon-f5p8t | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-f5p8t to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-7kvwj | Started | Started container machine-config-daemon |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-7kvwj | Created | Created container machine-config-daemon |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-7kvwj |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-spqnd |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-f5p8t |
| | openshift-machine-config-operator | default-scheduler | machine-config-daemon-spqnd | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-spqnd to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-config-operator | default-scheduler | machine-config-daemon-7kvwj | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-7kvwj to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 2 created because required configmap/config has changed,required configmap/cluster-policy-controller-config has changed |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-7kvwj | Started | Started container kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 1 because node ci-op-9xx71rvq-1e28e-w667k-master-0 static pod not found |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-7kvwj | Created | Created container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced." |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" |
| | openshift-etcd | multus | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" in 8.522s (8.522s including waiting) |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-f5p8t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-f5p8t | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-f5p8t | Created | Created container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-f5p8t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-f5p8t | Started | Started container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:418951fd0c8cc12783cc24b2f9c487b6bd277aee2cf182578bfca497a167063f" in 7.5s (7.5s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-f5p8t | Created | Created container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
openshift-machine-config-operator |
machine-config-operator |
ci-op-9xx71rvq-1e28e-w667k-master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} | |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTemplates | templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container attacher-kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" in 3.886s (3.886s including waiting) |
| | openshift-etcd | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" in 3.543s (3.543s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Started | Started container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Created | Created container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:faee251eeaea85be146c2f8c0d3c1ab21611fc16e36f00b82906954bcaf30d26" in 2.549s (2.549s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Created | Created container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" |
| | openshift-etcd | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-xb66l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" in 2.6s (2.6s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" in 2.56s (2.56s including waiting) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 3 created because required configmap/kube-controller-manager-pod has changed |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAzureFileCSIDriverOperatorCRProgressing: AzureFileDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" to "AzureDiskCSIDriverOperatorCRProgressing: AzureDiskDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-spqnd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-9958db496-pgws2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 16.67s (16.67s including waiting) |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-9d764bfb9-w5dr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 16.194s (16.194s including waiting) |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Created | Created container multus-admission-controller |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-9d764bfb9-w5dr5 | Created | Created container catalog-operator |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-spqnd | Created | Created container machine-config-daemon |
| | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12b3eec8af6f44826bb42555d0363aa80e03b444efc93f28b44aee68bf6fb109" in 16.506s (16.506s including waiting) |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-spqnd | Started | Started container machine-config-daemon |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7c88c666f8-r2wz4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 16.063s (16.063s including waiting) |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7c88c666f8-r2wz4 | Created | Created container package-server-manager |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-7c88c666f8-r2wz4 | Started | Started container package-server-manager |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-799db46f99-r6f42 | Started | Started container cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-799db46f99-r6f42 | Created | Created container cluster-monitoring-operator |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-spqnd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-799db46f99-r6f42 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d07a4a7e6f001e827895fd370373d91a4912e737b9c7bd56ad9e3aa2bdcd6349" in 16.352s (16.352s including waiting) |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-86c67755bb-2b7lz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de17441284be3fbe91e2df7e2d46a547a658a327201f9b51b58c70fe54f8378e" in 16.859s (16.859s including waiting) |
| | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c17343cfe2ce58f3278203ef9398d3472a313ca67702d107b482007f812bc4a7" in 16.099s (16.099s including waiting) |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc8c90f8ea1a38fe46d09b6351fa396eb6a398d3a72766c911662ae2644abcab" in 16.094s (16.094s including waiting) |
| | openshift-machine-api | cluster-autoscaler-operator-fffbcbd5b-hpsfj_1eb2a690-34db-4593-8b8d-029f9a491f37 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-fffbcbd5b-hpsfj_1eb2a690-34db-4593-8b8d-029f9a491f37 became leader |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-spqnd | Created | Created container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Started | Started container cluster-autoscaler-operator |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Created | Created container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" in 16.487s (16.487s including waiting) |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Started | Started container multus-admission-controller |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Created | Created container cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6180e87936a3baf9a45604c8ebbb4e28f6e46725d5de227051ff63d3fa3d8d40" in 16.506s (16.506s including waiting) |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Created | Created container cluster-baremetal-operator |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" in 16.473s (16.473s including waiting) |
| | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" in 16.441s (16.441s including waiting) |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-fffbcbd5b-hpsfj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed57c14910f2f4daaa3c9e0c04364d2989f9748cb634c53ee5903d54a0d5e737" in 16.234s (16.234s including waiting) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-spqnd | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-dhl9x" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-machine-api | replicaset-controller | machine-api-controllers-857c68d88f | SuccessfulCreate | Created pod: machine-api-controllers-857c68d88f-cpdp9 |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-machine-api | deployment-controller | machine-api-controllers | ScalingReplicaSet | Scaled up replica set machine-api-controllers-857c68d88f to 1 |
| | openshift-machine-api | cluster-baremetal-operator-6475c74794-8hd5r_03f14f61-c079-4518-9861-289a722a2149 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-6475c74794-8hd5r_03f14f61-c079-4518-9861-289a722a2149 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} | |
| | openshift-machine-api | default-scheduler | machine-api-controllers-857c68d88f-cpdp9 | Scheduled | Successfully assigned openshift-machine-api/machine-api-controllers-857c68d88f-cpdp9 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x23) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageFailed | configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CSRApproval | The CSR "system:openshift:openshift-monitoring-dhl9x" has been approved |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-mndv5" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-9958db496-pgws2 | Started | Started container olm-operator |
| | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | Created | Created container machine-api-operator |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-566b55489f-2ktqr | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-566b55489f-wzvmv | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-566b55489f | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-566b55489f-wzvmv |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-566b55489f | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-566b55489f-2ktqr |
| (x2) | openshift-monitoring | controllermanager | prometheus-operator-admission-webhook | NoPods | No matching pods found |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-566b55489f to 2 |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CSRApproval | The CSR "system:openshift:openshift-monitoring-mndv5" has been approved |
| | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-9d764bfb9-w5dr5 | Started | Started container catalog-operator |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-9958db496-pgws2 | Created | Created container olm-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | Started | Started container control-plane-machine-set-operator |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-86c67755bb-2b7lz_808a29d0-fa2a-484c-bd4b-3c998a0721d5 became leader |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Started | Started container cluster-baremetal-operator |
| | openshift-operator-lifecycle-manager | package-server-manager-7c88c666f8-r2wz4_884d5202-5192-4563-90b2-4e9244d7bf8f | packageserver-controller-lock | LeaderElection | package-server-manager-7c88c666f8-r2wz4_884d5202-5192-4563-90b2-4e9244d7bf8f became leader |
| | openshift-machine-api | kubelet | machine-api-operator-6f847dd5f5-wqkzk | Started | Started container machine-api-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg | Created | Created container control-plane-machine-set-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 1: configmap "kube-apiserver-pod" not found |
| | openshift-machine-api | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg_a5fdbea8-22cf-44ee-bf46-137ff0382da7 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-7f9c9cfdd9-6d8wg_a5fdbea8-22cf-44ee-bf46-137ff0382da7 became leader |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-86c67755bb-2b7lz | Started | Started container cluster-image-registry-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-86c67755bb-2b7lz | Created | Created container cluster-image-registry-operator |
| | openshift-marketplace | multus | redhat-operators-ddg4k | AddedInterface | Add eth0 [10.130.0.20/23] from ovn-kubernetes |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-marketplace | default-scheduler | redhat-operators-ddg4k | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-ddg4k to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | RequirementsUnknown | requirements not yet checked |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Created | Created container kube-rbac-proxy |
| | openshift-ingress | default-scheduler | router-default-7c66d9f4d8-hjjcl | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Created | Created container kube-rbac-proxy |
| | openshift-ingress | default-scheduler | router-default-7c66d9f4d8-wk77v | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-4v6xp | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Created | Created container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Created | Created container baremetal-kube-rbac-proxy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-marketplace | default-scheduler | certified-operators-24rdr | Scheduled | Successfully assigned openshift-marketplace/certified-operators-24rdr to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-ingress | replicaset-controller | router-default-7c66d9f4d8 | SuccessfulCreate | Created pod: router-default-7c66d9f4d8-wk77v |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-6475c74794-8hd5r | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-ingress | replicaset-controller | router-default-7c66d9f4d8 | SuccessfulCreate | Created pod: router-default-7c66d9f4d8-hjjcl |
| | openshift-marketplace | multus | certified-operators-24rdr | AddedInterface | Add eth0 [10.130.0.19/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-7c66d9f4d8 to 2 |
| | openshift-machine-api | multus | machine-api-controllers-857c68d88f-cpdp9 | AddedInterface | Add eth0 [10.130.0.18/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c17343cfe2ce58f3278203ef9398d3472a313ca67702d107b482007f812bc4a7" |
| | openshift-ingress | service-controller | router-default | EnsuringLoadBalancer | Ensuring load balancer |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.c\"...),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n" |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-machine-config-operator | default-scheduler | machine-config-controller-658949885-vhtct | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-658949885-vhtct to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Started | Started container kube-rbac-proxy |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-687cc5c899 to 2 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from True to False ("AzureDiskCSIDriverOperatorCRProgressing: All is well\nAzureFileCSIDriverOperatorCRProgressing: All is well") |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIServerURL | loginURL changed from to https://api.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com:6443 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-687cc5c899 | SuccessfulCreate | Created pod: packageserver-687cc5c899-cclnt |
| | openshift-marketplace | default-scheduler | community-operators-8x76m | Scheduled | Successfully assigned openshift-marketplace/community-operators-8x76m to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-687cc5c899 | SuccessfulCreate | Created pod: packageserver-687cc5c899-628ps |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTokenConfig | accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-658949885 | SuccessfulCreate | Created pod: machine-config-controller-658949885-vhtct |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-658949885 to 1 |
| | openshift-operator-lifecycle-manager | default-scheduler | packageserver-687cc5c899-628ps | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-687cc5c899-628ps to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-operator-lifecycle-manager | default-scheduler | packageserver-687cc5c899-cclnt | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-687cc5c899-cclnt to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-machine-config-operator | kubelet | machine-config-controller-658949885-vhtct | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-658949885-vhtct | Created | Created container machine-config-controller |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-687cc5c899-628ps | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-687cc5c899-cclnt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-machine-config-operator | kubelet | machine-config-controller-658949885-vhtct | Started | Started container machine-config-controller |
| | openshift-marketplace | default-scheduler | redhat-marketplace-pnlz7 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-pnlz7 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-machine-config-operator | kubelet | machine-config-controller-658949885-vhtct | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | multus | redhat-marketplace-pnlz7 | AddedInterface | Add eth0 [10.130.0.24/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-marketplace | multus | community-operators-8x76m | AddedInterface | Add eth0 [10.130.0.22/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-8x76m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-machine-config-operator | kubelet | machine-config-controller-658949885-vhtct | Created | Created container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | multus | packageserver-687cc5c899-628ps | AddedInterface | Add eth0 [10.130.0.21/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | multus | machine-config-controller-658949885-vhtct | AddedInterface | Add eth0 [10.130.0.23/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-658949885-vhtct | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-operator-lifecycle-manager | multus | packageserver-687cc5c899-cclnt | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| (x24) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh_openshift-cloud-controller-manager-operator(f8efb5a9-7af4-4683-b2a4-a4caa7e8ae02) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | default-scheduler | machine-config-server-lh4sp | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-lh4sp to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-qvxpf |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-x25mx |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-lh4sp |
| | openshift-machine-config-operator | kubelet | machine-config-server-x25mx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-lh4sp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com", "names":[]interface {}{"*.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com"}}} |
| | openshift-machine-config-operator | default-scheduler | machine-config-server-qvxpf | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-qvxpf to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.c\"...), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.ci-op-9xx71rvq-1e28e.qe.a\"...)},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
| | openshift-machine-config-operator | kubelet | machine-config-server-qvxpf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-machine-config-operator | default-scheduler | machine-config-server-x25mx | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-x25mx to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-config-operator | kubelet | machine-config-server-qvxpf | Started | Started container machine-config-server |
| | openshift-machine-config-operator | kubelet | machine-config-server-qvxpf | Created | Created container machine-config-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-server-lh4sp | Created | Created container machine-config-server |
| | openshift-machine-config-operator | kubelet | machine-config-server-lh4sp | Started | Started container machine-config-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 successfully generated (release version: 4.16.0-0.nightly-2024-06-10-211334, controller version: 53f3e1eef97a3e1c2cae0b3cbcae3e10f9228d8d) |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-3836bd588b1cc1c96287a7d6aef1e84e successfully generated (release version: 4.16.0-0.nightly-2024-06-10-211334, controller version: 53f3e1eef97a3e1c2cae0b3cbcae3e10f9228d8d) |
| | openshift-network-operator | kubelet | network-operator-7cbf958795-pszp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" already present on machine |
| (x23) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageFailed | configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found |
| | openshift-machine-config-operator | kubelet | machine-config-server-x25mx | Created | Created container machine-config-server |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 4 created because required configmap/serviceaccount-ca has changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-server-x25mx | Started | Started container machine-config-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: RequiredPoolsFailed | Unable to apply 4.16.0-0.nightly-2024-06-10-211334: error during syncRequiredMachineConfigPools: context deadline exceeded |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing |
| | openshift-kube-scheduler | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67976f8796-p7shh | Created | Created container etcd-operator |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 9.794s (9.794s including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 11.653s (11.653s including waiting) |
| (x10) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| (x2) | openshift-network-operator | kubelet | network-operator-7cbf958795-pszp8 | Started | Started container network-operator |
| (x2) | openshift-network-operator | kubelet | network-operator-7cbf958795-pszp8 | Created | Created container network-operator |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f96580d79cef3954a20bcbe62a91f0cafbb3d90ece402e9dc77f02bd013c9bd1" |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 11.783s (11.784s including waiting) |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container machineset-controller |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container machineset-controller |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c17343cfe2ce58f3278203ef9398d3472a313ca67702d107b482007f812bc4a7" in 12.397s (12.397s including waiting) |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Started | Started container extract-utilities |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator-lock |
LeaderElection |
cluster-storage-operator-74bf5c6c66-mlzgt_7ddd760c-128e-4dd8-b9cc-da842f894b12 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-machine-api |
machine-api-controllers-857c68d88f-cpdp9_c7c91dff-149a-46d0-9359-01ded0f62f74 |
cluster-api-provider-machineset-leader |
LeaderElection |
machine-api-controllers-857c68d88f-cpdp9_c7c91dff-149a-46d0-9359-01ded0f62f74 became leader | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-687cc5c899-628ps |
Started |
Started container packageserver | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-687cc5c899-628ps |
Created |
Created container packageserver | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing | |
| (x89) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-687cc5c899-628ps | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 10.444s (10.444s including waiting) |
| | openshift-marketplace | kubelet | community-operators-8x76m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 10.389s (10.389s including waiting) |
| | openshift-etcd-operator | kubelet | etcd-operator-67976f8796-p7shh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67976f8796-p7shh | Started | Started container etcd-operator |
| (x8) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Created | Created container extract-utilities |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-2_d29112a4-8163-458b-a05b-79aaceead2c4 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | EtcdMembersErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 4 created because required configmap/serviceaccount-ca has changed |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-marketplace | kubelet | community-operators-8x76m | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-8x76m | Created | Created container extract-utilities |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67976f8796-p7shh_c6a8cd57-3785-47ae-945b-5cdbb5ad4ec7 became leader |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Started | Started container extract-utilities |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" |
| | openshift-marketplace | kubelet | community-operators-8x76m | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-648fdc585-xghvk | Created | Created container kube-apiserver-operator |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-648fdc585-xghvk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" |
| (x32) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 10.0.0.5 |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-648fdc585-xghvk | Started | Started container kube-apiserver-operator |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5799f4fc64-s48zf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:191ff3bb0eed21729ce43c31634050ee410b4db69b64664701cf399f747d150c" already present on machine |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5799f4fc64-s48zf | Created | Created container openshift-apiserver-operator |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | FastControllerResync | Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-5799f4fc64-s48zf_7aecd53a-4bce-47a5-a564-328242297acf became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-648fdc585-xghvk_0f560b16-7d46-4495-af60-7cf077d9a197 became leader |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5799f4fc64-s48zf | Started | Started container openshift-apiserver-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-c8bf8fc99-cjm9q | Started | Started container service-ca-operator |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-c8bf8fc99-cjm9q | Created | Created container service-ca-operator |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-c8bf8fc99-cjm9q_1f27d7e2-b8ec-4601-80d8-5304763cf3b6 became leader |
| | openshift-service-ca-operator | kubelet | service-ca-operator-c8bf8fc99-cjm9q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f80df79d4e101968318c99f4f8bf6afc7c3729d2c1bf8eaf1fe3894bf8ff066" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f96580d79cef3954a20bcbe62a91f0cafbb3d90ece402e9dc77f02bd013c9bd1" in 6.058s (6.058s including waiting) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-machine-api | machine-api-controllers-857c68d88f-cpdp9_4a336a87-eec4-45d8-a678-46328fa84d3a | cluster-api-provider-azure-leader | LeaderElection | machine-api-controllers-857c68d88f-cpdp9_4a336a87-eec4-45d8-a678-46328fa84d3a became leader |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container nodelink-controller |
| | openshift-machine-api | machine-api-controllers-857c68d88f-cpdp9_9194f988-32ad-4848-a295-5fd75da6e6c6 | cluster-api-provider-nodelink-leader | LeaderElection | machine-api-controllers-857c68d88f-cpdp9_9194f988-32ad-4848-a295-5fd75da6e6c6 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-687cc5c899-cclnt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 17.883s (17.883s including waiting) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AdminNetworkPolicy=true,AlibabaPlatform=true,AutomatedEtcdBackup=false,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,CSIDriverSharedResource=false,ChunkSizeMiB=false,CloudDualStackNodeIPs=true,ClusterAPIInstall=false,ClusterAPIInstallAWS=true,ClusterAPIInstallAzure=false,ClusterAPIInstallGCP=false,ClusterAPIInstallIBMCloud=false,ClusterAPIInstallNutanix=true,ClusterAPIInstallOpenStack=true,ClusterAPIInstallPowerVS=false,ClusterAPIInstallVSphere=true,DNSNameResolver=false,DisableKubeletCloudCredentialProviders=true,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalCloudProvider=true,ExternalCloudProviderAzure=true,ExternalCloudProviderExternal=true,ExternalCloudProviderGCP=true,ExternalOIDC=false,ExternalRouteCertificate=false,GCPClusterHostedDNS=false,GCPLabelsTags=false,GatewayAPI=false,HardwareSpeed=true,ImagePolicy=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InstallAlternateInfrastructureAWS=false,KMSv1=true,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImages=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MetricsServer=true,MixedCPUsAllocation=false,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NewOLM=false,NodeDisruptionPolicy=false,NodeSwap=false,OnClusterBuild=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,PrivateHostedZoneAWS=true,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,ServiceAccountTokenNodeBindingValidation=false,ServiceAccountTokenPodNodeInfo=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=false,VSphereStaticIPs=true,ValidatingAdmissionPolicy=false,VolumeGroupSnapshot=false |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-machine-api | machine-api-provider-azure | machine-api-controllers | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c17343cfe2ce58f3278203ef9398d3472a313ca67702d107b482007f812bc4a7" already present on machine |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container nodelink-controller |
| | openshift-kube-scheduler | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-687cc5c899-cclnt | Created | Created container packageserver |
| | openshift-kube-scheduler | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7759655b55-g5bc2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c17343cfe2ce58f3278203ef9398d3472a313ca67702d107b482007f812bc4a7" already present on machine |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container machine-healthcheck-controller |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container machine-healthcheck-controller |
| | openshift-machine-api | machine-api-controllers-857c68d88f-cpdp9_97692af3-da24-495a-92e9-f67702b961de | cluster-api-provider-healthcheck-leader | LeaderElection | machine-api-controllers-857c68d88f-cpdp9_97692af3-da24-495a-92e9-f67702b961de became leader |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-687cc5c899-cclnt | Started | Started container packageserver |
| | openshift-kube-scheduler | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | multus | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-699c988f9d-nkb7r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-machine-api | azure-controller | ci-op-9xx71rvq-1e28e-w667k-master-2 | Updated | Updated machine "ci-op-9xx71rvq-1e28e-w667k-master-2" |
| (x2) | openshift-machine-api | azure-controller | ci-op-9xx71rvq-1e28e-w667k-master-1 | Updated | Updated machine "ci-op-9xx71rvq-1e28e-w667k-master-1" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-699c988f9d-nkb7r_120e1ddc-8a3e-4c28-8fcd-e627c865164f became leader |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-699c988f9d-nkb7r | Created | Created container kube-controller-manager-operator |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-699c988f9d-nkb7r | Started | Started container kube-controller-manager-operator |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7759655b55-g5bc2 | Created | Created container kube-scheduler-operator-container |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7759655b55-g5bc2 | Started | Started container kube-scheduler-operator-container |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-4 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-4,kube-scheduler-cert-syncer-kubeconfig-4,kube-scheduler-pod-4,scheduler-kubeconfig-4,serviceaccount-ca-4, secrets: localhost-recovery-client-token-4]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-4]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7759655b55-g5bc2_7cd660fe-0473-462d-a29b-14684d9ced17 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-4,kube-scheduler-cert-syncer-kubeconfig-4,kube-scheduler-pod-4,scheduler-kubeconfig-4,serviceaccount-ca-4, secrets: localhost-recovery-client-token-4]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x5) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key, configmaps: config-4,kube-scheduler-cert-syncer-kubeconfig-4,kube-scheduler-pod-4,scheduler-kubeconfig-4,serviceaccount-ca-4, secrets: localhost-recovery-client-token-4 |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5b9b5c7f89-z28dx_5d59397a-de5c-4fe2-b932-9caecfefe3fe became leader |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5c8c884d57 to 1 |
| | openshift-multus | default-scheduler | multus-admission-controller-5c8c884d57-hpscs | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5c8c884d57-hpscs to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| (x6) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-4,config-4,controller-manager-kubeconfig-4,kube-controller-cert-syncer-kubeconfig-4,kube-controller-manager-pod-4,recycler-config-4,service-ca-4,serviceaccount-ca-4 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-4,config-4,controller-manager-kubeconfig-4,kube-controller-cert-syncer-kubeconfig-4,kube-controller-manager-pod-4,recycler-config-4,service-ca-4,serviceaccount-ca-4]" |
| | openshift-multus | replicaset-controller | multus-admission-controller-5c8c884d57 | SuccessfulCreate | Created pod: multus-admission-controller-5c8c884d57-hpscs |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | Started | Started container openshift-controller-manager-operator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, secrets: localhost-recovery-client-token-4]\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-multus | multus | multus-admission-controller-5c8c884d57-hpscs | AddedInterface | Add eth0 [10.129.0.46/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-hpscs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-hpscs | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-hpscs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-hpscs | Created | Created container multus-admission-controller |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "etcd" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-etcd | static-pod-installer | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | Created | Created container openshift-controller-manager-operator |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-hpscs | Created | Created container kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: status.versions changed from [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"} {"etcd" "4.16.0-0.nightly-2024-06-10-211334"} {"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| (x2) | openshift-machine-api | azure-controller | ci-op-9xx71rvq-1e28e-w667k-master-0 | Updated | Updated machine "ci-op-9xx71rvq-1e28e-w667k-master-0" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-4,config-4,controller-manager-kubeconfig-4,kube-controller-cert-syncer-kubeconfig-4,kube-controller-manager-pod-4,recycler-config-4,service-ca-4,serviceaccount-ca-4]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-76c7cdf7c8-mtp8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5147e93c2e576f931347a59e16d62590879b343d879632c7f0ba3c138cfa575b" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-6fc7977fb to 1 from 2 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5c8c884d57 to 2 from 1 |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-hpscs | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Killing | Stopping container kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it was missing |
| | openshift-multus | default-scheduler | multus-admission-controller-5c8c884d57-2b5ph | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5c8c884d57-2b5ph to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-multus | kubelet | multus-admission-controller-6fc7977fb-zpcvg | Killing | Stopping container multus-admission-controller |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-multus | replicaset-controller | multus-admission-controller-6fc7977fb | SuccessfulDelete | Deleted pod: multus-admission-controller-6fc7977fb-zpcvg |
| | openshift-multus | replicaset-controller | multus-admission-controller-5c8c884d57 | SuccessfulCreate | Created pod: multus-admission-controller-5c8c884d57-2b5ph |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | FastControllerResync | Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling |
| (x3) | openshift-apiserver | controllermanager | openshift-apiserver-pdb | NoPods | No matching pods found |
| (x3) | openshift-kube-scheduler | controllermanager | openshift-kube-scheduler-guard-pdb | NoPods | No matching pods found |
| (x3) | openshift-etcd | controllermanager | etcd-guard-pdb | NoPods | No matching pods found |
| (x3) | openshift-kube-controller-manager | controllermanager | kube-controller-manager-guard-pdb | NoPods | No matching pods found |
| | openshift-machine-api | azure-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | Created | Created machine "ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49" |
| (x3) | openshift-oauth-apiserver | controllermanager | oauth-apiserver-pdb | NoPods | No matching pods found |
| (x3) | openshift-kube-apiserver | controllermanager | kube-apiserver-guard-pdb | NoPods | No matching pods found |
| | openshift-machine-api | azure-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Created | Created machine "ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Created | Created container openshift-config-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Started | Started container openshift-config-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-5cd48fc5bd-w9jqv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f9b07f19aafce26ce2e4bbdd2468b5f5e79842eb97811bfa4d83395c98dd6c36" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodCreated | Created Pod/etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-machine-api | azure-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | Created | Created machine "ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" in 3.934s (3.934s including waiting) |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container guard |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | multus | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-5cd48fc5bd-w9jqv_49e3bbcc-3039-4964-bc27-622e010751e8 became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container guard |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-ensure-env-vars |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveCloudProviderNamesChanges | CloudProvider config file changed to /etc/kubernetes/static-pod-resources/configmaps/cloud-config/cloud.conf |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/cloud-config -n openshift-kube-apiserver: cause by changes in data.cloud.conf,data.config |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodUpdated | Updated Pod/etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it changed |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-0 now has machineconfiguration.openshift.io/reason= |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-metrics |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | Uncordon | Update completed for config rendered-master-3836bd588b1cc1c96287a7d6aef1e84e and node has been uncordoned |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberAddAsLearner | successfully added new member https://10.0.0.8:2380 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-0, currentConfig rendered-master-3836bd588b1cc1c96287a7d6aef1e84e to Done |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-readyz |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-3836bd588b1cc1c96287a7d6aef1e84e |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | Uncordon | Update completed for config rendered-master-3836bd588b1cc1c96287a7d6aef1e84e and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-1, currentConfig rendered-master-3836bd588b1cc1c96287a7d6aef1e84e to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-3836bd588b1cc1c96287a7d6aef1e84e |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-1 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-1 now has machineconfiguration.openshift.io/state=Done |
| (x13) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageFailed | configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.8:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.8:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberPromote | successfully promoted learner member https://10.0.0.8:2380 |
| (x2) | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-28635045-pspjp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageFailed | configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://localhost:2379 |
| (x7) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca |
| (x14) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageFailed | configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, + "authentication-token-webhook-version": []any{string("v1")}, + "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/clo"...)}, + "etcd-servers": []any{string("https://localhost:2379")}, + "feature-gates": []any{ + string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), + string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("CSIDriverSharedResource=false"), string("ChunkSizeMiB=false"), ..., + }, + "send-retry-after-while-not-ready-once": []any{string("false")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.c"...)}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + 
string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 1 because static pod is ready | |
| (x25) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
no observedConfig |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 1",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1\nEtcdMembersAvailable: 1 members are available") | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7879b848d6-vbpk9 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7879b848d6-vbpk9 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has been created" |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7879b848d6 | SuccessfulCreate | Created pod: apiserver-7879b848d6-qdcqz |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.8:2379 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7879b848d6 to 3 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 1: configmap "kube-apiserver-pod" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7879b848d6 | SuccessfulCreate | Created pod: apiserver-7879b848d6-f9pgk |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any{\n \t\"apiServerArguments\": map[string]any{\n \t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n \t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+ \t\t\"etcd-servers\": []any{string(\"https://10.0.0.8:2379\")},\n \t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"tls-min-version\": string(\"VersionTLS12\"),\n \t},\n }\n" |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontrollerworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7879b848d6-f9pgk | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7879b848d6-f9pgk to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7879b848d6 | SuccessfulCreate | Created pod: apiserver-7879b848d6-vbpk9 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7879b848d6-qdcqz | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7879b848d6-qdcqz to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x19) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.3213203613138f0c |
| | openshift-oauth-apiserver | multus | apiserver-7879b848d6-vbpk9 | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" |
| | openshift-oauth-apiserver | multus | apiserver-7879b848d6-f9pgk | AddedInterface | Add eth0 [10.129.0.47/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.29.5" |
| | openshift-kube-scheduler | static-pod-installer | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeControllerDegraded: All master nodes are ready",status.versions changed from [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"} {"kube-scheduler" "1.29.5"} {"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container kube-rbac-proxy-machine-mtrc |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing |
| | openshift-marketplace | kubelet | community-operators-8x76m | Created | Created container extract-content |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container kube-rbac-proxy-machineset-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container kube-rbac-proxy-machineset-mtrc |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 39.433s (39.433s including waiting) |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-marketplace | kubelet | community-operators-8x76m | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 38.411s (38.411s including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 38.39s (38.39s including waiting) |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container kube-rbac-proxy-machine-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Started | Started container extract-content |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-qdcqz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" |
| | openshift-marketplace | kubelet | community-operators-8x76m | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Created | Created container extract-content |
| | openshift-oauth-apiserver | multus | apiserver-7879b848d6-qdcqz | AddedInterface | Add eth0 [10.130.0.26/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-5c8c884d57-2b5ph | AddedInterface | Add eth0 [10.130.0.25/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-5c8c884d57-2b5ph | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 39.391s (39.391s including waiting) |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Started | Started container extract-content |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Started | Started container fix-audit-permissions |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod, 2 containers are waiting in pending apiserver-7879b848d6-f9pgk pod, 2 containers are waiting in pending apiserver-7879b848d6-vbpk9 pod)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container kube-rbac-proxy-mhc-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container kube-rbac-proxy-mhc-mtrc |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" in 3.37s (3.37s including waiting) |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" in 3.203s (3.203s including waiting) |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Started | Started container fix-audit-permissions |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-etcd because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Created | Created container oauth-apiserver |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-2 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: EvaluationConditionsDetected changed from Unknown to False ("All is well") |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Started | Started container oauth-apiserver |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Started | Started container oauth-apiserver |
| | openshift-marketplace | kubelet | community-operators-8x76m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 1 because node ci-op-9xx71rvq-1e28e-w667k-master-1 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ ... // 2 identical entries "authentication-token-webhook-version": []any{string("v1")}, "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/clo"...)}, "etcd-servers": []any{ + string("https://10.0.0.8:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), ...}, "send-retry-after-while-not-ready-once": []any{string("false")}, ... // 2 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod, 2 containers are waiting in pending apiserver-7879b848d6-f9pgk pod, 2 containers are waiting in pending apiserver-7879b848d6-vbpk9 pod)" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-vbpk9 pod, 2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod, container is not ready in apiserver-7879b848d6-f9pgk pod)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.8:2379,https://localhost:2379 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container guard |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metrics-proxy-client-ca-2 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-1-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing | |
| (x5) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container guard | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-vbpk9 pod, 2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod, container is not ready in apiserver-7879b848d6-f9pgk pod)" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-f9pgk pod, container is not ready in apiserver-7879b848d6-vbpk9 pod, 2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodUpdated |
Updated Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it changed | |
openshift-etcd |
kubelet |
installer-1-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
installer-1-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container installer | |
openshift-etcd |
kubelet |
installer-1-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd |
multus |
installer-1-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.48/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://10.0.0.8:2379 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: ") | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, + "storageConfig": map[string]any{"urls": []any{string("https://10.0.0.8:2379")}}, } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/etcd-pod has changed" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-f9pgk pod, container is not ready in apiserver-7879b848d6-vbpk9 pod, 2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod)" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nAPIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod)",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/cloud-config-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 1" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2\nEtcdMembersAvailable: 1 members are available" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionCreate |
Revision 2 created because required configmap/etcd-endpoints has been created | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/etcd-endpoints has been created" | |
openshift-multus |
kubelet |
multus-admission-controller-5c8c884d57-2b5ph |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcd2eac2f4a4060a04319748ae6123c9e8fa96dfd8e16c530be345b3434cc6e9" in 9.16s (9.16s including waiting) | |
openshift-multus |
kubelet |
multus-admission-controller-5c8c884d57-2b5ph |
Created |
Created container multus-admission-controller | |
openshift-apiserver |
replicaset-controller |
apiserver-7847c9d86c |
SuccessfulCreate |
Created pod: apiserver-7847c9d86c-6gjp8 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-multus |
kubelet |
multus-admission-controller-5c8c884d57-2b5ph |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-5c8c884d57-2b5ph |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3.") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" | |
openshift-apiserver |
default-scheduler |
apiserver-7847c9d86c-6gjp8 |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-7847c9d86c-6gjp8 to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-apiserver |
default-scheduler |
apiserver-7847c9d86c-p5qtd |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-7847c9d86c-p5qtd to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver |
default-scheduler |
apiserver-7847c9d86c-tzr6j |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-7847c9d86c-tzr6j to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-apiserver |
replicaset-controller |
apiserver-7847c9d86c |
SuccessfulCreate |
Created pod: apiserver-7847c9d86c-tzr6j | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-7847c9d86c |
SuccessfulCreate |
Created pod: apiserver-7847c9d86c-p5qtd | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-7847c9d86c to 3 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-1-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Killing |
Stopping container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-3 -n openshift-etcd because it was missing | |
openshift-apiserver |
multus |
apiserver-7847c9d86c-tzr6j |
AddedInterface |
Add eth0 [10.129.0.49/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionCreate |
Revision 5 created because required configmap/serviceaccount-ca has changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container wait-for-host-port | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" | |
openshift-apiserver |
multus |
apiserver-7847c9d86c-p5qtd |
AddedInterface |
Add eth0 [10.128.0.28/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" in 13.348s (13.348s including waiting) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 5 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-scheduler | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-scheduler-cert-syncer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 4 because node ci-op-9xx71rvq-1e28e-w667k-master-0 static pod not found | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-3 -n openshift-etcd because it was missing | |
| (x9) | openshift-route-controller-manager |
kubelet |
route-controller-manager-78b66d7c68-g6tds |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x9) | openshift-route-controller-manager |
kubelet |
route-controller-manager-78b66d7c68-fjzpk |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-scheduler-recovery-controller | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-2-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod, 3 containers are waiting in pending apiserver-7847c9d86c-p5qtd pod)",Progressing changed from True to False ("All is well") | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-peer-client-ca-3 -n openshift-etcd because it was missing | |
| (x9) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
| (x9) | openshift-controller-manager |
kubelet |
controller-manager-5c89cb9bc9-j9bzk |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metrics-proxy-serving-ca-3 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" in 4.481s (4.481s including waiting) | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metrics-proxy-client-ca-3 -n openshift-etcd because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod, 3 containers are waiting in pending apiserver-7847c9d86c-p5qtd pod)" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod, 2 containers are waiting in pending apiserver-7847c9d86c-p5qtd pod)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing | |
| (x9) | openshift-controller-manager |
kubelet |
controller-manager-58c5c594b9-s5vgm |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-etcd |
multus |
installer-2-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.50/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" in 7.76s (7.76s including waiting) | |
openshift-etcd |
kubelet |
installer-2-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 5 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-3 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-2-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionCreate |
Revision 5 created because required configmap/serviceaccount-ca has changed | |
openshift-etcd |
kubelet |
installer-2-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Created |
Created container fix-audit-permissions | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Started |
Started container openshift-apiserver | |
openshift-kube-controller-manager |
multus |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.29/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-1 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-3 -n openshift-etcd because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod, 2 containers are waiting in pending apiserver-7847c9d86c-p5qtd pod)" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7847c9d86c-p5qtd pod, 2 containers are waiting in pending apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod)" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-node namespace | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionCreate |
Revision 1 created because configmap "kube-apiserver-pod-0" not found | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" in 3.746s (3.746s including waiting) | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift namespace | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-tzr6j |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/config has changed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/etcd-pod has changed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionCreate |
Revision 3 created because required configmap/etcd-pod has changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7847c9d86c-p5qtd pod, 2 containers are waiting in pending apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod)" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod, container is not ready in apiserver-7847c9d86c-p5qtd pod)" | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" in 3.489s (3.489s including waiting) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing | |
openshift-cluster-samples-operator |
replicaset-controller |
cluster-samples-operator-85fcdf6c4c |
SuccessfulCreate |
Created pod: cluster-samples-operator-85fcdf6c4c-v4whj | |
openshift-cluster-samples-operator |
default-scheduler |
cluster-samples-operator-85fcdf6c4c-v4whj |
Scheduled |
Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-85fcdf6c4c-v4whj to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container installer | |
openshift-cluster-samples-operator |
deployment-controller |
cluster-samples-operator |
ScalingReplicaSet |
Scaled up replica set cluster-samples-operator-85fcdf6c4c to 1 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 1 members are available" | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container installer | |
openshift-kube-scheduler |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-24rdr |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-pnlz7 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 25.446s (25.446s including waiting) | |
openshift-kube-scheduler |
multus |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.30/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-24rdr |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-24rdr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 25.17s (25.17s including waiting) | |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Started |
Started container fix-audit-permissions | |
openshift-kube-scheduler |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container installer | |
openshift-apiserver |
multus |
apiserver-7847c9d86c-6gjp8 |
AddedInterface |
Add eth0 [10.130.0.27/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-8x76m |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 24.283s (24.283s including waiting) | |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Created |
Created container fix-audit-permissions | |
openshift-kube-scheduler |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container installer | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-6gjp8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" | |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" in 25.882s (25.882s including waiting) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing | |
openshift-multus |
kubelet |
multus-admission-controller-5c8c884d57-2b5ph |
Started |
Started container kube-rbac-proxy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") | |
openshift-multus |
kubelet |
multus-admission-controller-5c8c884d57-2b5ph |
Created |
Created container kube-rbac-proxy | |
openshift-cluster-samples-operator |
multus |
cluster-samples-operator-85fcdf6c4c-v4whj |
AddedInterface |
Add eth0 [10.130.0.28/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-ddg4k |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 24.205s (24.205s including waiting) | |
openshift-marketplace |
kubelet |
community-operators-8x76m |
Created |
Created container registry-server | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 1 because node ci-op-9xx71rvq-1e28e-w667k-master-0 static pod not found | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled down replica set multus-admission-controller-6fc7977fb to 0 from 1 | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-85fcdf6c4c-v4whj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a221a4b48a517ad29dce2fd98a900a21503bb1157bd43949d9698dd372ee7f5e" | |
openshift-multus |
replicaset-controller |
multus-admission-controller-6fc7977fb |
SuccessfulDelete |
Deleted pod: multus-admission-controller-6fc7977fb-4v6xp | |
openshift-multus |
kubelet |
multus-admission-controller-6fc7977fb-4v6xp |
Killing |
Stopping container kube-rbac-proxy | |
openshift-marketplace |
kubelet |
community-operators-8x76m |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-ddg4k |
Started |
Started container registry-server | |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-6fc7977fb-4v6xp |
Killing |
Stopping container multus-admission-controller | |
openshift-marketplace |
kubelet |
redhat-marketplace-pnlz7 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-pnlz7 |
Created |
Created container registry-server | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Created |
Created container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Started |
Started container oauth-apiserver | |
openshift-marketplace |
kubelet |
redhat-operators-ddg4k |
Created |
Created container registry-server | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7847c9d86c-tzr6j pod, 3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod, container is not ready in apiserver-7847c9d86c-p5qtd pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod)",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-etcd |
kubelet |
installer-2-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Killing |
Stopping container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" | |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85fcdf6c4c-v4whj | Started | Started container cluster-samples-operator |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Started | Started container fix-audit-permissions |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85fcdf6c4c-v4whj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a221a4b48a517ad29dce2fd98a900a21503bb1157bd43949d9698dd372ee7f5e" already present on machine |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" in 5.663s (5.663s including waiting) |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85fcdf6c4c-v4whj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a221a4b48a517ad29dce2fd98a900a21503bb1157bd43949d9698dd372ee7f5e" in 5.528s (5.528s including waiting) |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85fcdf6c4c-v4whj | Created | Created container cluster-samples-operator |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Created | Created container fix-audit-permissions |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85fcdf6c4c-v4whj | Created | Created container cluster-samples-operator-watch |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Started | Started container openshift-apiserver |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"} {"oauth-apiserver" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-85fcdf6c4c-v4whj | Started | Started container cluster-samples-operator-watch |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[7] |
| | openshift-etcd | multus | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.51/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-ingress | service-controller | router-default | EnsuredLoadBalancer | Ensured load balancer |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod)" |
| (x9) | openshift-controller-manager | kubelet | controller-manager-6d46446fb6-s4zxm | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-3-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" in 2.877s (2.877s including waiting) |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | multus | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-7847c9d86c-6gjp8 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7847c9d86c-6gjp8 pod)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-2 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-2 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-2, currentConfig rendered-master-3836bd588b1cc1c96287a7d6aef1e84e to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-3836bd588b1cc1c96287a7d6aef1e84e |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | Uncordon | Update completed for config rendered-master-3836bd588b1cc1c96287a7d6aef1e84e and node has been uncordoned |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [-]etcd-readiness failed: reason withheld [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [+]shutdown ok readyz check failed |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-qdcqz | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [-]etcd-readiness failed: reason withheld [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [+]shutdown ok readyz check failed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 2 created because required configmap/config has changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"} {"openshift-apiserver" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7847c9d86c-6gjp8 pod)" to "All is well" |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config-operator version changed from [] to [{operator 4.16.0-0.nightly-2024-06-10-211334}] |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-qdcqz pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container installer |
| (x9) | openshift-route-controller-manager | kubelet | route-controller-manager-6c7c85d5db-pk6hs | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]") |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-kube-controller-manager | static-pod-installer | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.29.5" |
| | openshift-kube-scheduler | static-pod-installer | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"} {"kube-controller-manager" "1.29.5"} {"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | multus | installer-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" in 3.377s (3.377s including waiting) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-apiserver | kubelet | installer-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_2f6ee113-a230-4d38-ae24-82c954e19d99 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container guard |
| | openshift-kube-controller-manager | multus | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container guard |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container wait-for-host-port |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_ccc2ba33-d0da-4b0f-9dd8-ac328e3d5462 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-etcd | static-pod-installer | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 5",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 5") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 5 because static pod is ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 2 members are available" |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" in 3.632s (3.632s including waiting) | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container setup | |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-7kvwj |
ProbeError |
Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container setup | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 5 because node ci-op-9xx71rvq-1e28e-w667k-master-1 static pod not found | |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-7kvwj |
Unhealthy |
Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container etcd-resources-copy | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container etcd-metrics | |
openshift-kube-controller-manager |
multus |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.52/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container etcdctl | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container etcd-readyz | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container installer | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container etcd-metrics | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://10.0.0.6:2380 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodCreated |
Created Pod/etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container guard | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container guard | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd |
multus |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.53/23] from ovn-kubernetes | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberPromote |
successfully promoted learner member https://10.0.0.6:2380 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodUpdated |
Updated Pod/etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded changed from False to True ("GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2") | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 3 because static pod is ready | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3\nEtcdMembersAvailable: 2 members are available" | |
openshift-machine-api |
azure-controller |
ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
Updated |
Updated machine "ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49" | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-6777f8cb5c to 1 from 0 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-7879b848d6 |
SuccessfulDelete |
Deleted pod: apiserver-7879b848d6-qdcqz | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.7d736d9464ec5c19 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/etcd-endpoints has changed" | |
| (x3) | openshift-machine-api |
azure-controller |
ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
Updated |
Updated machine "ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9" |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6777f8cb5c |
SuccessfulCreate |
Created pod: apiserver-6777f8cb5c-cl69q | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://10.0.0.6:2379,https://10.0.0.8:2379 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: " map[string]any{\n \t\"apiServerArguments\": map[string]any{\n \t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n \t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\t\"etcd-servers\": []any{\n+ \t\t\tstring(\"https://10.0.0.6:2379\"),\n \t\t\tstring(\"https://10.0.0.8:2379\"),\n \t\t},\n \t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"tls-min-version\": string(\"VersionTLS12\"),\n \t},\n }\n" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.") | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-7879b848d6 to 2 from 3 | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://10.0.0.6:2379,https://10.0.0.8:2379,https://localhost:2379 |
openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Killing |
Stopping container oauth-apiserver | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ ... // 2 identical entries "authentication-token-webhook-version": []any{string("v1")}, "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/clo"...)}, "etcd-servers": []any{ + string("https://10.0.0.6:2379"), string("https://10.0.0.8:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), ...}, "send-retry-after-while-not-ready-once": []any{string("false")}, ... // 2 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/config has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-4 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-4 -n openshift-etcd because it was missing | |
| (x2) | openshift-machine-api |
azure-controller |
ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
Updated |
Updated machine "ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/cloud-config-3 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-peer-client-ca-4 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 3 because node ci-op-9xx71rvq-1e28e-w667k-master-2 static pod not found | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metrics-proxy-serving-ca-4 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metrics-proxy-client-ca-4 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-4 -n openshift-etcd because it was missing | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container setup | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-4 -n openshift-etcd because it was missing | |
openshift-etcd |
multus |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 |
AddedInterface |
Add eth0 [10.130.0.29/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing | |
openshift-kube-apiserver |
static-pod-installer |
installer-2-ci-op-9xx71rvq-1e28e-w667k-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 2 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container setup | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionTriggered |
new revision 4 triggered by "required configmap/etcd-endpoints has changed" | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container kube-apiserver-check-endpoints | |
| (x2) | openshift-oauth-apiserver |
default-scheduler |
apiserver-6777f8cb5c-cl69q |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionCreate |
Revision 4 created because required configmap/etcd-endpoints has changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/etcd-pod has changed" | |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing | |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-qdcqz |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-etcd |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" in 3.078s (3.078s including waiting) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 4\nEtcdMembersAvailable: 2 members are available" | |
openshift-etcd |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container installer | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-5 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.29.5" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"raw-internal" "4.16.0-0.nightly-2024-06-10-211334"} {"kube-apiserver" "1.29.5"} {"operator" "4.16.0-0.nightly-2024-06-10-211334"}] | |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodCreated | Created Pod/kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver | multus | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-qdcqz | Unhealthy | Readiness probe failed: Get "https://10.130.0.26:8443/readyz": dial tcp 10.130.0.26:8443: connect: connection refused |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-qdcqz | ProbeError | Readiness probe error: Get "https://10.130.0.26:8443/readyz": dial tcp 10.130.0.26:8443: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 on node ci-op-9xx71rvq-1e28e-w667k-master-0, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-etcd | kubelet | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container installer |
| | openshift-oauth-apiserver | default-scheduler | apiserver-6777f8cb5c-cl69q | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6777f8cb5c-cl69q to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container guard |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container guard |
| | openshift-kube-controller-manager | static-pod-installer | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-oauth-apiserver | multus | apiserver-6777f8cb5c-cl69q | AddedInterface | Add eth0 [10.130.0.30/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Created | Created container fix-audit-permissions |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-5 -n openshift-etcd because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Started | Started container fix-audit-permissions |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_103a1422-aead-461c-9392-ea7cc4b2079c became leader |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Created | Created container oauth-apiserver |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-78b66d7c68 to 1 from 2 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-78b66d7c68 to 0 from 1 |
| | openshift-controller-manager | default-scheduler | controller-manager-7cfc668fc8-d2fkd | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-78b66d7c68 | SuccessfulDelete | Deleted pod: route-controller-manager-78b66d7c68-fjzpk |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-78b66d7c68 | SuccessfulDelete | Deleted pod: route-controller-manager-78b66d7c68-g6tds |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 5",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 5") |
| | openshift-controller-manager | replicaset-controller | controller-manager-7cfc668fc8 | SuccessfulCreate | Created pod: controller-manager-7cfc668fc8-d2fkd |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-5 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 5 because static pod is ready |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67956fc655 | SuccessfulCreate | Created pod: route-controller-manager-67956fc655-w4vck |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-d8cbffd66 to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager | replicaset-controller | controller-manager-58c5c594b9 | SuccessfulDelete | Deleted pod: controller-manager-58c5c594b9-s5vgm |
| | openshift-controller-manager | replicaset-controller | controller-manager-5c89cb9bc9 | SuccessfulDelete | Deleted pod: controller-manager-5c89cb9bc9-j9bzk |
| | openshift-controller-manager | replicaset-controller | controller-manager-d8cbffd66 | SuccessfulCreate | Created pod: controller-manager-d8cbffd66-vbf7r |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5c89cb9bc9 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6d7d8b6854 to 1 from 0 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-67956fc655-w4vck | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-67956fc655 to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-76c7cdf7c8-mtp8c_ebdc0604-6c8f-45c5-b31f-b55f2abfefd9 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6d7d8b6854-dlnkl | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| (x2) | openshift-controller-manager | default-scheduler | controller-manager-d8cbffd66-vbf7r | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 5 because node ci-op-9xx71rvq-1e28e-w667k-master-1 static pod not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-67956fc655-w4vck | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "route-controller-manager-67956fc655-w4vck": pod route-controller-manager-67956fc655-w4vck is already assigned to node "ci-op-9xx71rvq-1e28e-w667k-master-2" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6d7d8b6854 | SuccessfulCreate | Created pod: route-controller-manager-6d7d8b6854-dlnkl |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | multus | route-controller-manager-6d7d8b6854-dlnkl | AddedInterface | Add eth0 [10.129.0.54/23] from ovn-kubernetes |
| | openshift-route-controller-manager | multus | route-controller-manager-67956fc655-w4vck | AddedInterface | Add eth0 [10.130.0.31/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67956fc655-w4vck | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-5 -n openshift-etcd because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-dlnkl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodUpdated | Updated Pod/kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-4-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Killing | Stopping container oauth-apiserver |
| | openshift-controller-manager | multus | controller-manager-7cfc668fc8-d2fkd | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-5 -n openshift-etcd because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-d2fkd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" |
| | openshift-etcd | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7879b848d6 to 1 from 2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7879b848d6 | SuccessfulDelete | Deleted pod: apiserver-7879b848d6-f9pgk |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6777f8cb5c to 2 from 1 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6777f8cb5c | SuccessfulCreate | Created pod: apiserver-6777f8cb5c-bcmz4 |
| | openshift-kube-scheduler | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-etcd | multus | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.32/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-5 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | multus | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.55/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-d8cbffd66-vbf7r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67956fc655-w4vck | Created | Created container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67956fc655-w4vck | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" in 3.04s (3.041s including waiting) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67956fc655-w4vck | Started | Started container route-controller-manager |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 5 triggered by "required configmap/etcd-pod has changed" |
| | openshift-controller-manager | multus | controller-manager-d8cbffd66-vbf7r | AddedInterface | Add eth0 [10.130.0.33/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreate | Revision 5 created because required configmap/etcd-pod has changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 3 created because required configmap/config has changed |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-67956fc655-w4vck_ebb690ea-ebbf-4238-bb04-75a900abedcf became leader |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-dlnkl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" in 3.228s (3.228s including waiting) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-dlnkl | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-dlnkl | Created | Created container route-controller-manager |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6d7d8b6854-qjgq9 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 4\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 2 members are available" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6d7d8b6854 | SuccessfulCreate | Created pod: route-controller-manager-6d7d8b6854-qjgq9 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6c7c85d5db to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6c7c85d5db | SuccessfulDelete | Deleted pod: route-controller-manager-6c7c85d5db-pk6hs |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-d2fkd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" in 4.31s (4.31s including waiting) |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6d7d8b6854 to 2 from 1 |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7cfc668fc8-d2fkd became leader |
| | openshift-controller-manager | kubelet | controller-manager-d8cbffd66-vbf7r | Created | Created container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-d8cbffd66-vbf7r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" in 3.454s (3.454s including waiting) |
| | openshift-controller-manager | kubelet | controller-manager-d8cbffd66-vbf7r | Started | Started container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-6d46446fb6 | SuccessfulDelete | Deleted pod: controller-manager-6d46446fb6-s4zxm |
| | openshift-controller-manager | replicaset-controller | controller-manager-7cfc668fc8 | SuccessfulCreate | Created pod: controller-manager-7cfc668fc8-mplwz |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6d7d8b6854-qjgq9 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6d7d8b6854-qjgq9 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-route-controller-manager | multus | route-controller-manager-6d7d8b6854-qjgq9 | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| (x2) | openshift-controller-manager | default-scheduler | controller-manager-7cfc668fc8-mplwz | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-qjgq9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
openshift-etcd |
kubelet |
installer-4-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Killing |
Stopping container installer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" in 13.034s (13.034s including waiting) | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container guard | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container guard | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine | |
openshift-kube-controller-manager |
multus |
kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.56/23] from ovn-kubernetes | |
| | openshift-controller-manager | multus | controller-manager-7cfc668fc8-mplwz | AddedInterface | Add eth0 [10.129.0.57/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-mplwz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-qjgq9 | Created | Created container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-qjgq9 | Started | Started container route-controller-manager |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-qjgq9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" in 3.288s (3.288s including waiting) |
| | openshift-etcd | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6d7d8b6854-9jxht | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67956fc655-w4vck | Killing | Stopping container route-controller-manager |
| (x2) | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set route-controller-manager-6d7d8b6854 to 3 from 2 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67956fc655 | SuccessfulDelete | Deleted pod: route-controller-manager-67956fc655-w4vck |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2, Unable to apply pod kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 changes: Operation cannot be fulfilled on pods \"kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1\": the object has been modified; please apply your changes to the latest version and try again]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdateFailed | Failed to update Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager: Operation cannot be fulfilled on pods "kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6d7d8b6854 | SuccessfulCreate | Created pod: route-controller-manager-6d7d8b6854-9jxht |
| | openshift-etcd | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-etcd | multus | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.34/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| (x4) | openshift-oauth-apiserver | default-scheduler | apiserver-6777f8cb5c-bcmz4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-mplwz | Created | Created container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-mplwz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" in 4.593s (4.593s including waiting) |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-mplwz | Started | Started container controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" in 4.394s (4.394s including waiting) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container cluster-policy-controller |
| | openshift-route-controller-manager | multus | route-controller-manager-6d7d8b6854-9jxht | AddedInterface | Add eth0 [10.130.0.35/23] from ovn-kubernetes |
| | openshift-controller-manager | replicaset-controller | controller-manager-7cfc668fc8 | SuccessfulCreate | Created pod: controller-manager-7cfc668fc8-xtcks |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-controller-manager | replicaset-controller | controller-manager-d8cbffd66 | SuccessfulDelete | Deleted pod: controller-manager-d8cbffd66-vbf7r |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6d7d8b6854-9jxht_de2a442e-ea36-4921-a28e-c1b1b8086397 became leader |
| | openshift-controller-manager | kubelet | controller-manager-d8cbffd66-vbf7r | Killing | Stopping container controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-controller-manager-cert-syncer |
| (x6) | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | (combined from similar events): Scaled up replica set controller-manager-7cfc668fc8 to 3 from 2 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-7df985cbf9-f4swj_2e374742-fb61-4dd2-80de-d35b401e2efb became leader |
| | openshift-controller-manager | default-scheduler | controller-manager-7cfc668fc8-xtcks | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-f9pgk | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-xtcks | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" already present on machine |
| | openshift-controller-manager | multus | controller-manager-7cfc668fc8-xtcks | AddedInterface | Add eth0 [10.130.0.36/23] from ovn-kubernetes |
| (x2) | openshift-network-diagnostics | default-scheduler | network-check-source-775df55c85-86pxw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | kube-system | | | | Required control plane pods have been created |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager because it changed |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-xtcks | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-xtcks | Created | Created container controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 3 members are available" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2, Unable to apply pod kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 changes: Operation cannot be fulfilled on pods \"kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1\": the object has been modified; please apply your changes to the latest version and try again]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-oauth-apiserver | default-scheduler | apiserver-6777f8cb5c-bcmz4 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6777f8cb5c-bcmz4 to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-6777f8cb5c-bcmz4 | AddedInterface | Add eth0 [10.129.0.58/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Started | Started container oauth-apiserver |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ + string("https://10.0.0.6:2379"), string("https://10.0.0.8:2379"), }, }, } |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.6:2379,https://10.0.0.8:2379 |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6777f8cb5c to 3 from 2 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." |
| | openshift-apiserver | replicaset-controller | apiserver-7c577f45d7 | SuccessfulCreate | Created pod: apiserver-7c577f45d7-bp26v |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4.") |
| | openshift-apiserver | replicaset-controller | apiserver-7847c9d86c | SuccessfulDelete | Deleted pod: apiserver-7847c9d86c-6gjp8 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6777f8cb5c | SuccessfulCreate | Created pod: apiserver-6777f8cb5c-jj8xw |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7847c9d86c to 2 from 3 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7c577f45d7 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7879b848d6 to 0 from 1 |
| | openshift-oauth-apiserver | kubelet | apiserver-7879b848d6-vbpk9 | Killing | Stopping container oauth-apiserver |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-6gjp8 | Killing | Stopping container openshift-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7879b848d6 | SuccessfulDelete | Deleted pod: apiserver-7879b848d6-vbpk9 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 5 because static pod is ready |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-vbpk9 pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-scheduler | static-pod-installer | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container wait-for-host-port |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 5 because node ci-op-9xx71rvq-1e28e-w667k-master-2 static pod not found |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| (x2) | openshift-apiserver | default-scheduler | apiserver-7c577f45d7-bp26v | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-scheduler-recovery-controller |
| (x40) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 10.0.0.5 |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" | |
openshift-kube-controller-manager |
multus |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 |
AddedInterface |
Add eth0 [10.130.0.37/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
multus |
openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.59/23] from ovn-kubernetes | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-7847c9d86c-6gjp8 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-7847c9d86c-6gjp8 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-vbpk9 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x3) | openshift-oauth-apiserver |
kubelet |
apiserver-7879b848d6-vbpk9 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container guard | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container guard | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" in 2.99s (2.99s including waiting) | |
openshift-kube-apiserver |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
multus |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.37/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container installer | |
| (x3) | openshift-oauth-apiserver |
default-scheduler |
apiserver-6777f8cb5c-jj8xw |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container installer | |
openshift-apiserver |
default-scheduler |
apiserver-7c577f45d7-bp26v |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-7c577f45d7-bp26v to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-kube-controller-manager |
kubelet |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container installer | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-7879b848d6-vbpk9 pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in terminated apiserver-7879b848d6-vbpk9 pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver |
multus |
apiserver-7c577f45d7-bp26v |
AddedInterface |
Add eth0 [10.130.0.38/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodUpdateFailed |
Failed to update Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-scheduler: Operation cannot be fulfilled on pods "openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1": the object has been modified; please apply your changes to the latest version and try again | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in terminated apiserver-7879b848d6-vbpk9 pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2, Unable to apply pod openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 changes: Operation cannot be fulfilled on pods \"openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1\": the object has been modified; please apply your changes to the latest version and try again]" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-7c577f45d7-bp26v |
Created |
Created container openshift-apiserver | |
kube-system |
Required control plane pods have been created | ||||
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6777f8cb5c-jj8xw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-oauth-apiserver |
multus |
apiserver-6777f8cb5c-jj8xw |
AddedInterface |
Add eth0 [10.128.0.38/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodUpdated |
Updated Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-scheduler because it changed | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Started |
Started container oauth-apiserver | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2, Unable to apply pod openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 changes: Operation cannot be fulfilled on pods \"openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1\": the object has been modified; please apply your changes to the latest version and try again]" to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Created |
Created container oauth-apiserver | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6777f8cb5c-jj8xw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6777f8cb5c-jj8xw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6777f8cb5c-jj8xw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" to "GuardControllerDegraded: Missing PodIP in operand etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 on node ci-op-9xx71rvq-1e28e-w667k-master-2" | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" | |
openshift-etcd |
static-pod-installer |
installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container setup | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container setup | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" in 3.386s (3.386s including waiting) | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-master-1_497e5fa5-6f51-4fcf-bb53-e0dd23b84394 became leader | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Killing |
Stopping container openshift-apiserver | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver |
kubelet |
apiserver-7847c9d86c-p5qtd |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing | |
default |
node-controller |
ci-op-9xx71rvq-1e28e-w667k-master-2 |
RegisteredNode |
Node ci-op-9xx71rvq-1e28e-w667k-master-2 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-2 in Controller | |
openshift-apiserver |
endpoint-controller |
api |
FailedToUpdateEndpoint |
Failed to update endpoint openshift-apiserver/api: Operation cannot be fulfilled on endpoints "api": the object has been modified; please apply your changes to the latest version and try again | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-7c577f45d7 to 2 from 1 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-7847c9d86c to 1 from 2 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing | |
default |
node-controller |
ci-op-9xx71rvq-1e28e-w667k-master-1 |
RegisteredNode |
Node ci-op-9xx71rvq-1e28e-w667k-master-1 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-1 in Controller | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerOK |
found expected kube-apiserver endpoints | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodCreated |
Created Pod/etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-7c577f45d7 |
SuccessfulCreate |
Created pod: apiserver-7c577f45d7-jlktw | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-apiserver |
replicaset-controller |
apiserver-7847c9d86c |
SuccessfulDelete |
Deleted pod: apiserver-7847c9d86c-p5qtd | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing | |
default |
node-controller |
ci-op-9xx71rvq-1e28e-w667k-master-0 |
RegisteredNode |
Node ci-op-9xx71rvq-1e28e-w667k-master-0 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-0 in Controller | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://10.0.0.7:2380 | |
openshift-apiserver |
endpoint-controller |
check-endpoints |
FailedToUpdateEndpoint |
Failed to update endpoint openshift-apiserver/check-endpoints: Operation cannot be fulfilled on endpoints "check-endpoints": the object has been modified; please apply your changes to the latest version and try again | |
openshift-kube-scheduler-operator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-4 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container guard |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container guard |
| | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | multus | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.39/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 6 created because required secret/localhost-recovery-client-token has changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberPromote | successfully promoted learner member https://10.0.0.7:2380 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 5" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | multus | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.60/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 6 created because required secret/localhost-recovery-client-token has changed |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-2_bb5c0099-d5e3-4fe8-8657-f9f1bf97ce1b became leader |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" |
| | openshift-kube-controller-manager | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodUpdated | Updated Pod/etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it changed |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" architecture="amd64" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | multus | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.40/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 4 created because required secret/localhost-recovery-client-token has changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| (x3) | openshift-apiserver | kubelet | apiserver-7847c9d86c-p5qtd | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x5) | default | machineapioperator | machine-api | Status degraded | minimum worker replica count (2) not yet met: current running replicas 0, waiting for [ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49] |
| (x3) | openshift-apiserver | kubelet | apiserver-7847c9d86c-p5qtd | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any{\n \t\"apiServerArguments\": map[string]any{\n \t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n \t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n \t\t\"etcd-servers\": []any{\n \t\t\tstring(\"https://10.0.0.6:2379\"),\n+ \t\t\tstring(\"https://10.0.0.7:2379\"),\n \t\t\tstring(\"https://10.0.0.8:2379\"),\n \t\t},\n \t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"tls-min-version\": string(\"VersionTLS12\"),\n \t},\n }\n" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 5 because static pod is ready |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-f74744fc5 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3.") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.a4a1160c07133b06 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-f74744fc5 | SuccessfulCreate | Created pod: apiserver-f74744fc5-xrzm4 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379,https://localhost:2379 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6777f8cb5c to 2 from 3 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 6 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 3; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5\nEtcdMembersAvailable: 3 members are available" |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6777f8cb5c | SuccessfulDelete | Deleted pod: apiserver-6777f8cb5c-jj8xw |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379 |
| (x2) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontrollerworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ ... // 2 identical entries "authentication-token-webhook-version": []any{string("v1")}, "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/clo"...)}, "etcd-servers": []any{ string("https://10.0.0.6:2379"), + string("https://10.0.0.7:2379"), string("https://10.0.0.8:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), ...}, "send-retry-after-while-not-ready-once": []any{string("false")}, ... // 2 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-jj8xw | Killing | Stopping container oauth-apiserver |
| | openshift-apiserver | multus | apiserver-7c577f45d7-jlktw | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Created | Created container fix-audit-permissions |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | static-pod-installer | installer-3-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-6 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-5 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Started | Started container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Created | Created container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Created | Created container openshift-apiserver |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-etcd because it was missing |
| (x2) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-6 -n openshift-etcd because it was missing |
| (x2) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-6 -n openshift-etcd because it was missing |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ string("https://10.0.0.6:2379"), + string("https://10.0.0.7:2379"), string("https://10.0.0.8:2379"), }, }, } |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.6:2379,https://10.0.0.7:2379,https://10.0.0.8:2379 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-6 -n openshift-etcd because it was missing |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-jj8xw | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-jj8xw | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-6 -n openshift-etcd because it was missing |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-78d6c6c648 to 1 from 0 |
| (x4) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | BackOff | Back-off restarting failed container openshift-apiserver in pod apiserver-7c577f45d7-jlktw_openshift-apiserver(b504a9f0-bcc4-4425-8527-13c3ec67d80d) |
| (x4) | openshift-apiserver | kubelet | apiserver-7c577f45d7-jlktw | BackOff | Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-7c577f45d7-jlktw_openshift-apiserver(b504a9f0-bcc4-4425-8527-13c3ec67d80d) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-apiserver | replicaset-controller | apiserver-7c577f45d7 | SuccessfulDelete | Deleted pod: apiserver-7c577f45d7-jlktw |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7c577f45d7 to 1 from 2 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." |
| | openshift-apiserver | replicaset-controller | apiserver-78d6c6c648 | SuccessfulCreate | Created pod: apiserver-78d6c6c648-d7kss |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-6 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 6 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreate | Revision 6 created because required configmap/etcd-endpoints has changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 7 triggered by "required configmap/etcd-pod has changed" |
| | openshift-kube-apiserver | multus | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Started | Started container fix-audit-permissions |
| | openshift-apiserver | multus | apiserver-78d6c6c648-d7kss | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
openshift-kube-apiserver |
kubelet |
installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container installer | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.38:8443/readyz": dial tcp 10.128.0.38:8443: connect: connection refused | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-jj8xw |
ProbeError |
Readiness probe error: Get "https://10.128.0.38:8443/readyz": dial tcp 10.128.0.38:8443: connect: connection refused body: | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-7 -n openshift-etcd because it was missing | |
| (x2) | openshift-apiserver |
kubelet |
apiserver-78d6c6c648-d7kss |
Started |
Started container openshift-apiserver |
| (x2) | openshift-apiserver |
kubelet |
apiserver-78d6c6c648-d7kss |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-7 -n openshift-etcd because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 1 to 5 because node ci-op-9xx71rvq-1e28e-w667k-master-0 with revision 1 is the oldest |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 6\nEtcdMembersAvailable: 3 members are available" |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Created | Created container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-7 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-7 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | static-pod-installer | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-scheduler |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-7 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-f74744fc5-xrzm4 | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| (x5) | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | BackOff | Back-off restarting failed container openshift-apiserver in pod apiserver-78d6c6c648-d7kss_openshift-apiserver(fca3bc13-cb7f-4a88-88cc-cc435f4cab54) |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-7 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | multus | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Started | Started container fix-audit-permissions |
| (x3) | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | BackOff | Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-78d6c6c648-d7kss_openshift-apiserver(fca3bc13-cb7f-4a88-88cc-cc435f4cab54) |
| | openshift-etcd | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-7 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 5 created because required configmap/sa-token-signing-certs has changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreate | Revision 7 created because required configmap/etcd-pod has changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 6\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 3 members are available" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 7 triggered by "required configmap/etcd-pod has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" |
| (x4) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.6:10259/healthz": dial tcp 10.0.0.6:10259: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| (x4) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:10259/healthz": dial tcp 10.0.0.6:10259: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| (x3) | openshift-apiserver | kubelet | apiserver-78d6c6c648-d7kss | Created | Created container openshift-apiserver |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | static-pod-installer | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-apiserver | kubelet | installer-4-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container installer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 on node ci-op-9xx71rvq-1e28e-w667k-master-2" |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 6: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| | default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | ProbeError | Startup probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused body: |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Unhealthy | Startup probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" in 10.234s (10.234s including waiting) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" in 2.868s (2.868s including waiting) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-controller-manager-cert-syncer |
| (x11) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Created | Created container oauth-apiserver |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Started | Started container oauth-apiserver |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| (x8) | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-xrzm4 | BackOff | Back-off restarting failed container oauth-apiserver in pod apiserver-f74744fc5-xrzm4_openshift-oauth-apiserver(89e97185-b8d4-4311-b124-2a3b01cf4387) |
| | openshift-multus | node-controller | multus-nr9x6 | NodeNotReady | Node is not ready |
| | openshift-etcd | node-controller | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-kube-apiserver | node-controller | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-controller-manager | node-controller | controller-manager-7cfc668fc8-d2fkd | NodeNotReady | Node is not ready |
| | openshift-kube-controller-manager | node-controller | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | machine-config-server-lh4sp | NodeNotReady | Node is not ready |
| | openshift-network-operator | node-controller | iptables-alerter-j88xk | NodeNotReady | Node is not ready |
| | openshift-kube-controller-manager | node-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-cluster-storage-operator | node-controller | csi-snapshot-webhook-5b799d8d59-kj7nt | NodeNotReady | Node is not ready |
| | openshift-cluster-csi-drivers | node-controller | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | NodeNotReady | Node is not ready |
| | openshift-cluster-csi-drivers | node-controller | azure-disk-csi-driver-node-fzdwd | NodeNotReady | Node is not ready |
| | openshift-cluster-csi-drivers | node-controller | azure-disk-csi-driver-controller-6d9996db94-b8cs5 | NodeNotReady | Node is not ready |
| | openshift-dns | node-controller | dns-default-tfrnn | NodeNotReady | Node is not ready |
| | openshift-network-diagnostics | node-controller | network-check-target-fmdsm | NodeNotReady | Node is not ready |
| | openshift-network-node-identity | node-controller | network-node-identity-gs6c8 | NodeNotReady | Node is not ready |
| | openshift-cluster-storage-operator | node-controller | csi-snapshot-controller-5677697b57-np5dk | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | machine-config-daemon-f5p8t | NodeNotReady | Node is not ready |
| | openshift-cloud-controller-manager | node-controller | azure-cloud-node-manager-njdg9 | NodeNotReady | Node is not ready |
| | openshift-kube-scheduler | node-controller | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-cluster-csi-drivers | node-controller | azure-file-csi-driver-node-b6dqs | NodeNotReady | Node is not ready |
| | openshift-cloud-controller-manager | node-controller | azure-cloud-controller-manager-ccfbdcbbd-dxwmk | NodeNotReady | Node is not ready |
| | openshift-machine-config-operator | node-controller | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-etcd | node-controller | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node ci-op-9xx71rvq-1e28e-w667k-master-0 status is now: NodeNotReady |
| | openshift-cluster-node-tuning-operator | node-controller | tuned-lbfm2 | NodeNotReady | Node is not ready |
| | openshift-multus | node-controller | multus-additional-cni-plugins-xj48s | NodeNotReady | Node is not ready |
| | openshift-etcd | node-controller | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | node-controller | ovnkube-control-plane-5df5bbb869-7dsfz | NodeNotReady | Node is not ready |
| | openshift-operator-lifecycle-manager | node-controller | packageserver-687cc5c899-cclnt | NodeNotReady | Node is not ready |
| | openshift-dns | node-controller | node-resolver-p2bm7 | NodeNotReady | Node is not ready |
| | openshift-cluster-csi-drivers | node-controller | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | NodeNotReady | Node is not ready |
| | openshift-kube-apiserver | node-controller | apiserver-watcher-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| | openshift-ovn-kubernetes | node-controller | ovnkube-node-rxzbs | NodeNotReady | Node is not ready |
| | openshift-route-controller-manager | node-controller | route-controller-manager-6d7d8b6854-qjgq9 | NodeNotReady | Node is not ready |
| | openshift-multus | node-controller | network-metrics-daemon-jttv4 | NodeNotReady | Node is not ready |
| | openshift-cluster-csi-drivers | node-controller | azure-file-csi-driver-controller-7bf87ccd87-xb66l | NodeNotReady | Node is not ready |
| | openshift-kube-scheduler | node-controller | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeNotReady | Node is not ready |
| (x7) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.6:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x7) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Unhealthy | Startup probe failed: Get "https://10.0.0.7:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container cluster-policy-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container cluster-policy-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-controller-manager |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-controller-manager |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | ProbeError | Startup probe error: Get "https://10.0.0.7:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Liveness probe error: Get "https://10.0.0.6:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.6:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Liveness probe failed: Get "https://10.0.0.6:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Startup probe error: Get "https://10.0.0.6:10257/healthz": dial tcp 10.0.0.6:10257: connect: connection refused body: |
| | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | ProbeError | Liveness probe error: Get "http://10.129.0.21:8080/healthz": dial tcp 10.129.0.21:8080: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | Unhealthy | Liveness probe failed: Get "http://10.129.0.21:8080/healthz": dial tcp 10.129.0.21:8080: connect: connection refused |
openshift-marketplace |
kubelet |
marketplace-operator-867c6b6ccc-rmltl |
ProbeError |
Readiness probe error: Get "http://10.129.0.21:8080/healthz": dial tcp 10.129.0.21:8080: connect: connection refused body: | |
openshift-marketplace |
kubelet |
marketplace-operator-867c6b6ccc-rmltl |
Unhealthy |
Readiness probe failed: Get "http://10.129.0.21:8080/healthz": dial tcp 10.129.0.21:8080: connect: connection refused | |
| (x3) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Unhealthy |
Startup probe failed: Get "https://10.0.0.6:10257/healthz": dial tcp 10.0.0.6:10257: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container setup |
| (x13) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 5 count 0 on node "ci-op-9xx71rvq-1e28e-w667k-master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| (x15) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x15) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | EtcdMembersErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x15) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | Failed to create installer pod for revision 7 count 0 on node "ci-op-9xx71rvq-1e28e-w667k-master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| (x14) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | Failed to create installer pod for revision 6 count 0 on node "ci-op-9xx71rvq-1e28e-w667k-master-2": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2": dial tcp 172.30.0.1:443: connect: connection refused |
| (x11) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:10257/healthz": dial tcp 10.0.0.6:10257: connect: connection refused |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | Created | Created container marketplace-operator |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | Started | Started container marketplace-operator |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:12b3eec8af6f44826bb42555d0363aa80e03b444efc93f28b44aee68bf6fb109" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | KubeAPIReadyz | readyz=true |
| (x22) | default | kubelet | ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeHasSufficientMemory | Node ci-op-9xx71rvq-1e28e-w667k-master-0 status is now: NodeHasSufficientMemory |
| (x22) | default | kubelet | ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeHasNoDiskPressure | Node ci-op-9xx71rvq-1e28e-w667k-master-0 status is now: NodeHasNoDiskPressure |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Started | Started container ingress-operator |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-controller-6d9996db94-26g2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6db88450d13dc077711534a005852b06eaa6ff38c7fa366f99c53556c42697a1" already present on machine |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Created | Created container ingress-operator |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-controller-7bf87ccd87-qcs5n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3bd19b870f9551af296dce9d947bc273832d50ab86757035355993f59a347c" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | Started | Started container machine-config-operator |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-2_bb5c0099-d5e3-4fe8-8657-f9f1bf97ce1b stopped leading |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-7df985cbf9-f4swj | BackOff | Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-7df985cbf9-f4swj_openshift-kube-storage-version-migrator-operator(0b77f9c5-299a-4420-8697-7f43315721f0) |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-9jxht | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-6d64fdfbc-xtlls | Created | Created container machine-config-operator |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-9jxht | Created | Created container route-controller-manager |
| (x2) | openshift-cluster-version | kubelet | cluster-version-operator-6fff9b89f6-zgszm | Created | Created container cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-6fff9b89f6-zgszm | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-9jxht | Started | Started container route-controller-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5bbb869-x5nhm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5bbb869-x5nhm | Created | Created container ovnkube-cluster-manager |
| (x2) | openshift-cluster-version | kubelet | cluster-version-operator-6fff9b89f6-zgszm | Started | Started container cluster-version-operator |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5bbb869-x5nhm | Started | Started container ovnkube-cluster-manager |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-2_cc899b7a-d221-4dd9-91bc-64b2c44fcfb3 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.16.0-0.nightly-2024-06-10-211334" image="registry.build02.ci.openshift.org/ci-op-9xx71rvq/release@sha256:65102daae8065dffb1c67481ff030f5ad71eab5a7335d2498348a84cb5189074" architecture="amd64" |
| (x2) | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-56cffd86cf-c4tcz | Started | Started container controller |
| | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-56cffd86cf-c4tcz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fbaae7684d6ac205ebd327f527be846cf3dce959ab41648405ab5d6b20e03fd" already present on machine |
| (x2) | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-56cffd86cf-c4tcz | Created | Created container controller |
| | openshift-ovn-kubernetes | controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5df5bbb869-x5nhm became leader |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-74bf5c6c66-mlzgt | BackOff | Back-off restarting failed container cluster-storage-operator in pod cluster-storage-operator-74bf5c6c66-mlzgt_openshift-cluster-storage-operator(34aca7a8-b503-40fa-8dbf-13aaf05afe7e) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| (x7) | openshift-insights | kubelet | insights-operator-6c5c749b84-s7zkf | BackOff | Back-off restarting failed container insights-operator in pod insights-operator-6c5c749b84-s7zkf_openshift-insights(d677751e-aea9-47bd-bdb4-87084ad90c2b) |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f96580d79cef3954a20bcbe62a91f0cafbb3d90ece402e9dc77f02bd013c9bd1" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| (x2) | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Created | Created container machine-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-machine-api | kubelet | machine-api-controllers-857c68d88f-cpdp9 | Started | Started container machine-controller |
| | openshift-machine-api | machine-api-provider-azure | machine-api-controllers | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x2) | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-ccfbdcbbd-dxwmk | Created | Created container cloud-controller-manager |
| | openshift-kube-apiserver | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0_openshift-kube-apiserver_905d4fa5-2b71-4c27-8e52-7fc8facd59f7_0(b614ba99f08493ed028d3433aeec9cfbdf064a7954463186cd65adb4c1c8b7a7): error adding pod openshift-kube-apiserver_installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b614ba99f08493ed028d3433aeec9cfbdf064a7954463186cd65adb4c1c8b7a7" Netns:"/var/run/netns/0a5f8117-d5fc-47d0-b757-7e8bbce062b9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0;K8S_POD_INFRA_CONTAINER_ID=b614ba99f08493ed028d3433aeec9cfbdf064a7954463186cd65adb4c1c8b7a7;K8S_POD_UID=905d4fa5-2b71-4c27-8e52-7fc8facd59f7" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0] networking: Multus: [openshift-kube-apiserver/installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0/905d4fa5-2b71-4c27-8e52-7fc8facd59f7]: error waiting for pod: pod "installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-ccfbdcbbd-dxwmk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ce4fcfefebbc59a93cb599fbecd9dfdc61aca056610ba34247b5c8e1934dfaa" already present on machine |
| (x2) | openshift-cloud-controller-manager | kubelet | azure-cloud-controller-manager-ccfbdcbbd-dxwmk | Started | Started container cloud-controller-manager |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | azure-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x2) | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | Created | Created container azure-file-csi-driver-operator |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-7df985cbf9-f4swj | Created | Created container kube-storage-version-migrator-operator |
| (x2) | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | Started | Started container azure-file-csi-driver-operator |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-7df985cbf9-f4swj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2217f372554ab69fda40095c92140fd60b05035749446270d5acabc18b956a9b" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6cf975b6c8-zdsgh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-operator-66b9ff7945-fpvl2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d132ab75ab591682220976b04e6e82e37482fa971fd9e3576f8f144095897eec" already present on machine |
| (x3) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-7df985cbf9-f4swj | Started | Started container kube-storage-version-migrator-operator |
| (x2) | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | Started | Started container azure-disk-csi-driver-operator |
| (x2) | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | Created | Created container azure-disk-csi-driver-operator |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b7fcff180a9d703eaff4eed0aaa4879bc21f6ff1f39c55f4836a2a135eb5da44" already present on machine |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_c86093bd-51c1-4a3a-94b6-b0d50bbe0904 became leader |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-d2fkd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" already present on machine |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-d2fkd | Created | Created container controller-manager |
| (x2) | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-d2fkd | Started | Started container controller-manager |
| | openshift-kube-apiserver | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-74bf5c6c66-mlzgt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e805f1ea4410781909560e7065cbb4d7ea50ca32b91b98e16f31216290bfc2a3" already present on machine |
| (x3) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-74bf5c6c66-mlzgt | Started | Started container cluster-storage-operator |
| (x3) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-74bf5c6c66-mlzgt | Created | Created container cluster-storage-operator |
| | openshift-kube-apiserver | multus | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from False to True ("AzureDiskProgressing: Waiting for Deployment to deploy pods") |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-74bf5c6c66-mlzgt_fde9e98b-0d2a-48a3-adf7-d3a29ba7dcaf became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskProgressing: Waiting for Deployment to deploy pods" to "AzureDiskProgressing: Waiting for Deployment to deploy pods\nAzureFileProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-apiserver | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x5) | openshift-insights | kubelet | insights-operator-6c5c749b84-s7zkf | Created | Created container insights-operator |
| (x5) | openshift-insights | kubelet | insights-operator-6c5c749b84-s7zkf | Started | Started container insights-operator |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_688d4805-8b97-4d1f-845c-5d45b121e487 became leader |
| (x4) | openshift-insights | kubelet | insights-operator-6c5c749b84-s7zkf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eade372bea1974bf9b2e7fefd818ff900b0c6b1ff4b80107fc3f378b95861420" already present on machine |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | SuccessfulCreate | Created pod: azure-disk-csi-driver-node-qmbvr |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | SuccessfulCreate | Created pod: azure-file-csi-driver-node-gz7kd |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-b7mbg |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "AzureDiskProgressing: Waiting for Deployment to deploy pods\nAzureFileProgressing: Waiting for Deployment to deploy pods" to "AzureDiskProgressing: Waiting for Deployment to deploy pods" |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-2 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-2 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-2 in Controller |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-7wq8n |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-ctlcc |
default |
node-controller |
ci-op-9xx71rvq-1e28e-w667k-master-0 |
RegisteredNode |
Node ci-op-9xx71rvq-1e28e-w667k-master-0 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-0 in Controller | |
openshift-cluster-node-tuning-operator |
daemonset-controller |
tuned |
SuccessfulCreate |
Created pod: tuned-p487g | |
default |
node-controller |
ci-op-9xx71rvq-1e28e-w667k-master-1 |
RegisteredNode |
Node ci-op-9xx71rvq-1e28e-w667k-master-1 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-1 in Controller | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing changed from True to False ("AzureDiskCSIDriverOperatorCRProgressing: All is well\nAzureFileCSIDriverOperatorCRProgressing: All is well") | |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-r82gp |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-p48ld |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | SuccessfulCreate | Created pod: azure-file-csi-driver-node-mft7l |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-9pnbf |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-l8bk2 |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-8xrbm |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-tnm8w |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-4hhxq |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-k2wml |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-cnwtn |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-8qg9z |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-p98p7 |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | SuccessfulCreate | Created pod: azure-disk-csi-driver-node-6wk8q |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-p4qhk |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-mgs54 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-7hlr6 |
| | openshift-cluster-machine-approver | ci-op-9xx71rvq-1e28e-w667k-master-1_7a783c16-5264-48ef-a950-dfdf347437dc | cluster-machine-approver-leader | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_7a783c16-5264-48ef-a950-dfdf347437dc became leader |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp in Controller |
| | openshift-cluster-csi-drivers | external-resizer-file-csi-azure-com/azure-file-csi-driver-controller-7bf87ccd87-xb66l | external-resizer-file-csi-azure-com | LeaderElection | azure-file-csi-driver-controller-7bf87ccd87-xb66l became leader |
| | openshift-cloud-controller-manager | daemonset-controller | azure-cloud-node-manager | SuccessfulCreate | Created pod: azure-cloud-node-manager-t6wgr |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-disk-csi-driver-node | SuccessfulCreate | Created pod: azure-disk-csi-driver-node-mv6v5 |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-xcz98 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-fh4k2 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-4gxw6 |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-qs9t5 |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-lxhxn |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-xjnf6 |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-qp2gp |
| | openshift-cluster-csi-drivers | daemonset-controller | azure-file-csi-driver-node | SuccessfulCreate | Created pod: azure-file-csi-driver-node-qgwhz |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-w65vj |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" |
| | openshift-dns | kubelet | node-resolver-qs9t5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" |
| | openshift-multus | kubelet | multus-4gxw6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lxhxn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xjnf6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-t6wgr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xjnf6 | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xjnf6 | Created | Created container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xjnf6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xjnf6 | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xjnf6 | Created | Created container machine-config-daemon |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_ab0be4fe-0720-4b40-aca0-28e502270f03 became leader |
| (x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | Created | Created container kube-rbac-proxy-crio |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 in Controller |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-6wk8q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p4qhk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-mft7l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-p48ld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-k2wml | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" |
| | openshift-multus | kubelet | multus-r82gp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" |
| | openshift-dns | kubelet | node-resolver-l8bk2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9pnbf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p4qhk | Created | Created container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p4qhk | Created | Created container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | external-snapshotter-leader-disk.csi.azure.com/azure-disk-csi-driver-controller-6d9996db94-26g2j | external-snapshotter-leader-disk-csi-azure-com | LeaderElection | azure-disk-csi-driver-controller-6d9996db94-26g2j became leader |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p4qhk | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p4qhk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p4qhk | Started | Started container machine-config-daemon |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p487g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-dns | kubelet | node-resolver-7wq8n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" |
| | openshift-multus | kubelet | multus-7hlr6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-ctlcc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06cb5faab03003ec68dedbb23fbbdef0c98eb80ba70affedb7703df613ca31ac" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-ctlcc | Created | Created container machine-config-daemon |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-cnwtn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-ctlcc | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-ctlcc | Created | Created container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-ctlcc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-ctlcc | Started | Started container kube-rbac-proxy |
| (x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-apiserver | replicaset-controller | apiserver-7847c9d86c | SuccessfulDelete | Deleted pod: apiserver-7847c9d86c-tzr6j |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Started | Started container kube-rbac-proxy-crio |
| | openshift-cloud-controller-manager | cloud-controller-manager | cloud-controller-manager | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_7165c947-c479-4581-8f85-7244bed90ae3 became leader |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Created | Created container kube-rbac-proxy-crio |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7847c9d86c to 0 from 1 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-78d6c6c648 to 2 from 1 |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6d7d8b6854-qjgq9_aabe98c8-ab49-4d3a-be7a-ac043426877c became leader |
| | openshift-apiserver | replicaset-controller | apiserver-78d6c6c648 | SuccessfulCreate | Created pod: apiserver-78d6c6c648-zwlsw |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-tzr6j | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-7847c9d86c-tzr6j | Killing | Stopping container openshift-apiserver |
| | openshift-ingress | service-controller | router-default | EnsuringLoadBalancer | Ensuring load balancer |
| | openshift-ingress | service-controller | router-default | EnsuredLoadBalancer | Ensured load balancer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | static-pod-installer | installer-5-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6777f8cb5c | SuccessfulDelete | Deleted pod: apiserver-6777f8cb5c-bcmz4 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-f74744fc5 | SuccessfulCreate | Created pod: apiserver-f74744fc5-czt9k |
| (x3) | openshift-apiserver | kubelet | apiserver-7847c9d86c-tzr6j | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-apiserver | kubelet | apiserver-7847c9d86c-tzr6j | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-cnwtn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" in 16.767s (16.767s including waiting) |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-f74744fc5 to 2 from 1 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6777f8cb5c to 1 from 2 |
| (x7) | openshift-network-diagnostics |
kubelet |
network-check-target-qp2gp |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-fb2fv" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x7) | openshift-multus |
kubelet |
network-metrics-daemon-xcz98 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 6 because static pod is ready | |
| (x18) | openshift-multus |
kubelet |
network-metrics-daemon-xcz98 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x18) | openshift-network-diagnostics |
kubelet |
network-check-target-qp2gp |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-apiserver | multus | apiserver-78d6c6c648-zwlsw | AddedInterface | Add eth0 [10.129.0.61/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Created | Created container fix-audit-permissions |
| (x9) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 6 because static pod is ready |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Created | Created container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Started | Started container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6" |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container kubecfg-setup |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-t6wgr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 39.522s (39.522s including waiting) |
| | openshift-dns | kubelet | node-resolver-qs9t5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" in 39.432s (39.432s including waiting) |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" in 39.567s (39.567s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 39.539s (39.539s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 39.558s (39.558s including waiting) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-7 -n openshift-kube-apiserver because it was missing |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 39.573s (39.573s including waiting) |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lxhxn | Started | Started container tuned |
| | openshift-multus | kubelet | multus-4gxw6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" in 39.57s (39.57s including waiting) |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lxhxn | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lxhxn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" in 39.483s (39.483s including waiting) |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-t6wgr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Created | Created container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" |
| | openshift-dns | kubelet | node-resolver-qs9t5 | Started | Started container dns-node-resolver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Created | Created container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Created | Created container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container kubecfg-setup |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Started | Started container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-t6wgr | Created | Created container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-t6wgr | Started | Started container azure-inject-credentials |
| | openshift-dns | kubelet | node-resolver-qs9t5 | Created | Created container dns-node-resolver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container ovn-acl-logging |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-7 -n openshift-kube-apiserver because it was missing |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container nbdb |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49, currentConfig rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | Uncordon | Update completed for config rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 and node has been uncordoned |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container ovn-controller |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container ovn-acl-logging |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Uncordon | Update completed for config rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp, currentConfig rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-7 -n openshift-kube-apiserver because it was missing |
| (x6) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | BackOff | Back-off restarting failed container openshift-apiserver in pod apiserver-78d6c6c648-zwlsw_openshift-apiserver(1e6dcf58-3858-4c50-a86e-18bbf0ac4fa7) |
| (x4) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | BackOff | Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-78d6c6c648-zwlsw_openshift-apiserver(1e6dcf58-3858-4c50-a86e-18bbf0ac4fa7) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-7 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-p98p7 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-mgs54 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-8vvfn" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-7 -n openshift-kube-apiserver because it was missing |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-controller-manager because it was missing |
| (x5) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-controller-manager | multus | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.41/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container guard |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container guard |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-7 -n openshift-kube-apiserver because it was missing |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-mgs54 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-8qg9z | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-sw2kf" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-8qg9z | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 5 to 6 because node ci-op-9xx71rvq-1e28e-w667k-master-0 with revision 5 is the oldest |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-8xrbm | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-p98p7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | Unhealthy | Readiness probe failed: Get "https://10.129.0.58:8443/readyz": dial tcp 10.129.0.58:8443: connect: connection refused |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-8xrbm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-bcmz4 | ProbeError | Readiness probe error: Get "https://10.129.0.58:8443/readyz": dial tcp 10.129.0.58:8443: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-7 -n openshift-kube-apiserver because it was missing |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 34.967s (34.967s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 35.013s (35.013s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 35.014s (35.014s including waiting) |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-czt9k | Created | Created container fix-audit-permissions |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 34.966s (34.966s including waiting) |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p487g | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" in 34.934s (34.934s including waiting) |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | installer errors: installer: 172.30.0.1:443: connect: connection refused W0611 10:54:23.275218 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:54:33.274705 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:54:43.274799 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:54:53.275204 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:55:03.275731 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:55:13.275291 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:55:13.276318 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 10:55:13.276358 1 cmd.go:105] timed out waiting for the condition |
| | openshift-oauth-apiserver | multus | apiserver-f74744fc5-czt9k | AddedInterface | Add eth0 [10.129.0.62/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-czt9k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-multus | kubelet | multus-7hlr6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" in 35s (35s including waiting) |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-czt9k | Started | Started container fix-audit-permissions |
| | openshift-dns | kubelet | node-resolver-7wq8n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" in 34.9s (34.9s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container kube-rbac-proxy-node |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p487g | Created | Created container tuned |
| | openshift-multus | kubelet | multus-additional-cni-plugins-cnwtn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-cnwtn | Started | Started container egress-router-binary-copy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container kubecfg-setup |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Created | Created container azure-inject-credentials |
| | openshift-dns | kubelet | node-resolver-7wq8n | Created | Created container dns-node-resolver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-cnwtn | Created | Created container egress-router-binary-copy |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Created | Created container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container kubecfg-setup |
| | openshift-dns | kubelet | node-resolver-7wq8n | Started | Started container dns-node-resolver |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Started | Started container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Started | Started container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-7 -n openshift-kube-apiserver because it was missing |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container ovn-controller |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Created | Created container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container kube-rbac-proxy-node |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Started | Started container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p487g | Started | Started container tuned |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-7 -n openshift-kube-apiserver because it was missing |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods") |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well") |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9pnbf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b09fd8c20080a440e2fb91e64deed04b5a8678296f0376dfa2f2908941b5309a" in 39.495s (39.495s including waiting) |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-controller-manager because it changed |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-7 -n openshift-kube-apiserver because it was missing | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-4hhxq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-7,config-7,etcd-serving-ca-7,kube-apiserver-audit-policies-7,kube-apiserver-cert-syncer-kubeconfig-7,kube-apiserver-pod-7,kubelet-serving-ca-7,sa-token-signing-certs-7 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-7,config-7,etcd-serving-ca-7,kube-apiserver-audit-policies-7,kube-apiserver-cert-syncer-kubeconfig-7,kube-apiserver-pod-7,kubelet-serving-ca-7,sa-token-signing-certs-7" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-7,config-7,etcd-serving-ca-7,kube-apiserver-audit-policies-7,kube-apiserver-cert-syncer-kubeconfig-7,kube-apiserver-pod-7,kubelet-serving-ca-7,sa-token-signing-certs-7" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionCreate |
Revision 7 created because configmap "sa-token-signing-certs-6" not found | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 7 triggered by "configmap \"sa-token-signing-certs-6\" not found" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-4hhxq |
Started |
Started container sbdb | |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4hhxq | Created | Created container sbdb |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 1 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 3 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]\nRevisionControllerDegraded: configmap \"revision-status-7\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 1 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-controller-manager | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 6 because node ci-op-9xx71rvq-1e28e-w667k-master-2 static pod not found |
| | openshift-kube-controller-manager | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | multus | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| (x4) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 7 triggered by "configmap \"sa-token-signing-certs-6\" not found" |
| | openshift-kube-scheduler | multus | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.42/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | Uncordon | Update completed for config rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9, currentConfig rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 to Done |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-f4cc71d726c1dfbaa9a15a8e0d1198a8 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-scheduler because it was missing |
| (x24) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]\nRevisionControllerDegraded: configmap \"revision-status-7\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-6wk8q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 49.924s (49.925s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" in 49.932s (49.932s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-mft7l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 49.911s (49.911s including waiting) |
| | openshift-multus | kubelet | multus-r82gp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" in 49.89s (49.89s including waiting) |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-k2wml | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5a0b342d2946d03911c22f02d11d555d9c3650769380e160f0628ff97bd9f8" in 49.895s (49.895s including waiting) |
| | openshift-dns | kubelet | node-resolver-l8bk2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" in 49.864s (49.864s including waiting) |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-k2wml | Created | Created container tuned |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-p48ld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1bd9232bd59867a84e0c1ce986e4d77e8077d3d01eb3d0b9977ecdcad6a82d38" in 49.839s (49.839s including waiting) |
| | openshift-network-node-identity | ci-op-9xx71rvq-1e28e-w667k-master-1_7e3aa261-807b-4014-9cc3-c6b25e0e1fa4 | ovnkube-identity | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_7e3aa261-807b-4014-9cc3-c6b25e0e1fa4 became leader |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" in 20.201s (20.201s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9pnbf | Created | Created container egress-router-binary-copy |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-k2wml | Started | Started container tuned |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-p48ld | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 21.233s (21.233s including waiting) |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Created | Created container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" in 21.288s (21.288s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Created | Created container kubecfg-setup |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-t6wgr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" in 21.235s (21.235s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Created | Created container csi-driver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9pnbf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9pnbf | Started | Started container egress-router-binary-copy |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-6wk8q | Created | Created container azure-inject-credentials |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-mft7l | Created | Created container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-dns | kubelet | node-resolver-l8bk2 | Created | Created container dns-node-resolver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Started | Started container kubecfg-setup |
| | openshift-dns | kubelet | node-resolver-l8bk2 | Started | Started container dns-node-resolver |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-mft7l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:23.275218 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:33.274705 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:43.274799 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:53.275204 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:03.275731 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:13.275291 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:13.276318 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:55:13.276358 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: \nEtcdMembersDegraded: No unhealthy members found",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 4 members are available" |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-p48ld | Started | Started container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Started | Started container ovn-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 7" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-mft7l | Started | Started container azure-inject-credentials |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Created | Created container ovn-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: kube-apiserver-audit-policies-6,sa-token-signing-certs-6, secrets: etcd-client-6,localhost-recovery-client-token-6,localhost-recovery-serving-certkey-6]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-6wk8q | Started | Started container azure-inject-credentials |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-p48ld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.819s (2.819s including waiting) |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Created | Created container bond-cni-plugin |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Started | Started container bond-cni-plugin |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Created | Created container ovn-acl-logging |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" in 7.719s (7.719s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.732s (2.732s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Created | Created container csi-node-driver-registrar |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" in 2.378s (2.379s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-6wk8q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Started | Started container kube-rbac-proxy-node |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-multus | kubelet | multus-additional-cni-plugins-w65vj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| | openshift-etcd | multus | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-etcd | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" in 18.215s (18.215s including waiting) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 18.427s (18.427s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-qmbvr | Started | Started container csi-driver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-tnm8w | Created | Created container northd |
| | openshift-cluster-csi-drivers | kubelet | azure-disk-csi-driver-node-mv6v5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.968s (2.968s including waiting) |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-qgwhz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.934s (2.934s including waiting) |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" in 18.423s (18.423s including waiting) |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-cnwtn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" |
| | openshift-cluster-csi-drivers | kubelet | azure-file-csi-driver-node-gz7kd | Created | Created container csi-driver |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Started |
Started container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Created |
Created container cni-plugins | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Started |
Started container northd | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Created |
Created container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Started |
Started container csi-driver | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" in 18.45s (18.45s including waiting) | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Created |
Created container nbdb | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.48/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" in 2.683s (2.683s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container pruner | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-mv6v5 |
Started |
Started container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-qgwhz |
Created |
Created container csi-liveness-probe | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container pruner | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Started |
Started container nbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Created |
Created container routeoverride-cni | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-mv6v5 |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-qgwhz |
Started |
Started container csi-liveness-probe | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Started |
Started container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Created |
Created container bond-cni-plugin | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.189s (2.189s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Started |
Started container bond-cni-plugin | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.169s (2.169s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" in 1.901s (1.901s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.63/23] from ovn-kubernetes | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container pruner | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Created |
Created container csi-liveness-probe | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" in 1.822s (1.822s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Started |
Started container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.142s (2.142s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Started |
Started container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.213s (2.213s including waiting) | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
multus |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.49/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Created |
Created container sbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Started |
Started container routeoverride-cni | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container installer | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Created |
Created container routeoverride-cni | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
AddedInterface |
Add eth0 [10.130.0.43/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container pruner | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-tnm8w |
Started |
Started container sbdb | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" in 5.783s (5.784s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" in 9.951s (9.951s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Created |
Created container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Created |
Created container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Created |
Created container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Created |
Created container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Started |
Started container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Started |
Started container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-w65vj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-cnwtn |
Created |
Created container kube-multus-additional-cni-plugins | |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-qgwhz |
Unhealthy |
Liveness probe failed: Get "http://:10305/healthz": dial tcp :10305: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-mv6v5 |
ProbeError |
Liveness probe error: Get "http://:10304/healthz": dial tcp :10304: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-mv6v5 |
Unhealthy |
Liveness probe failed: Get "http://:10300/healthz": dial tcp :10300: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-mv6v5 |
ProbeError |
Liveness probe error: Get "http://:10300/healthz": dial tcp :10300: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-qgwhz |
Unhealthy |
Liveness probe failed: Get "http://:10302/healthz": dial tcp :10302: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-qgwhz |
ProbeError |
Liveness probe error: Get "http://:10302/healthz": dial tcp :10302: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-mv6v5 |
Unhealthy |
Liveness probe failed: Get "http://:10304/healthz": dial tcp :10304: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-qgwhz |
ProbeError |
Liveness probe error: Get "http://:10305/healthz": dial tcp :10305: connect: connection refused body: |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-f74744fc5-czt9k |
Created |
Created container oauth-apiserver |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-f74744fc5-czt9k |
Started |
Started container oauth-apiserver |
| (x4) | openshift-oauth-apiserver |
kubelet |
apiserver-f74744fc5-czt9k |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Unhealthy |
Liveness probe failed: Get "http://:10302/healthz": dial tcp :10302: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
ProbeError |
Liveness probe error: Get "http://:10305/healthz": dial tcp :10305: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
ProbeError |
Liveness probe error: Get "http://:10300/healthz": dial tcp :10300: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
ProbeError |
Liveness probe error: Get "http://:10302/healthz": dial tcp :10302: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-gz7kd |
Unhealthy |
Liveness probe failed: Get "http://:10305/healthz": dial tcp :10305: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Unhealthy |
Liveness probe failed: Get "http://:10304/healthz": dial tcp :10304: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
ProbeError |
Liveness probe error: Get "http://:10304/healthz": dial tcp :10304: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-qmbvr |
Unhealthy |
Liveness probe failed: Get "http://:10300/healthz": dial tcp :10300: connect: connection refused |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-f74744fc5-czt9k |
BackOff |
Back-off restarting failed container oauth-apiserver in pod apiserver-f74744fc5-czt9k_openshift-oauth-apiserver(572954e8-f4c0-45b2-9b7e-05e3760e286c) |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Created |
Created container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Started |
Started container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" | |
| (x4) | openshift-ingress-operator |
kubelet |
ingress-operator-66bb9945d4-25hsj |
BackOff |
Back-off restarting failed container ingress-operator in pod ingress-operator-66bb9945d4-25hsj_openshift-ingress-operator(02cfa9bb-9a97-4686-9f4e-bcc0d3c3b53c) |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-p48ld |
Created |
Created container cloud-node-manager | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Started |
Started container csi-driver | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-p48ld |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" in 33.45s (33.45s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Started |
Started container csi-driver | |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-p48ld |
Started |
Started container cloud-node-manager | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Created |
Created container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b19f2d14cd886282f9e0307d8d6332af732ffab98ac5322a35a918121f2fad4" in 33.442s (33.442s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:443e172a5bba1222249dea114b13e2df0d1b0f7992ef3b774723c8aec78bb522" in 35.456s (35.456s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:70fe518883175c417f736849278c0b614ba907ce768d4f069f9ff16bdcf4b2b7" in 34.445s (34.445s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Created |
Created container csi-driver | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.227s (2.227s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d830c52b43c856c7c028326d64168ace2b44f8864f626cf15036118fdcc446c" in 2.16s (2.16s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492aea82e8accb6e690e9251e98bf5592433f92ca4d3df9bcad7af44a482559d" in 2.021s (2.021s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Started |
Started container csi-node-driver-registrar | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Created |
Created container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Started |
Started container bond-cni-plugin | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.137s (2.137s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8e78d4df6fa60f107524286e6b4ad9f5682dd7fc844f98414bdcf73138a75c3" in 2.156s (2.156s including waiting) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Started |
Started container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Started |
Started container csi-liveness-probe | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Started |
Started container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Created |
Created container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68a95a354a5bb6c5312ebd4670ae305b8bf0123ed426048ed5befcbfeeff3fda" in 6.296s (6.296s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 |
BackOff |
Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1_openshift-kube-controller-manager(c2b0703e3cb16eba542a9d4112bd2475) | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Unhealthy |
Liveness probe failed: Get "http://:10305/healthz": dial tcp :10305: connect: connection refused | |
openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
ProbeError |
Liveness probe error: Get "http://:10305/healthz": dial tcp :10305: connect: connection refused body: | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Unhealthy |
Liveness probe failed: Get "http://:10304/healthz": dial tcp :10304: connect: connection refused | |
openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
ProbeError |
Liveness probe error: Get "http://:10304/healthz": dial tcp :10304: connect: connection refused body: | |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
Unhealthy |
Liveness probe failed: Get "http://:10302/healthz": dial tcp :10302: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
Unhealthy |
Liveness probe failed: Get "http://:10300/healthz": dial tcp :10300: connect: connection refused |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-file-csi-driver-node-mft7l |
ProbeError |
Liveness probe error: Get "http://:10302/healthz": dial tcp :10302: connect: connection refused body: |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
azure-disk-csi-driver-node-6wk8q |
ProbeError |
Liveness probe error: Get "http://:10300/healthz": dial tcp :10300: connect: connection refused body: |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Started |
Started container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Created |
Created container whereabouts-cni-bincopy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" in 19.248s (19.248s including waiting) | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8206d4649f0806073d7dd4df10dcbbb47e35e29d0f51d15af5c0d1ba86c3a9d" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Created |
Created container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Started |
Started container whereabouts-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-9pnbf |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container setup | |
| (x14) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretUpdateFailed |
Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Put "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/service-account-private-key": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-t6wgr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" already present on machine | |
| (x2) | openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-t6wgr |
Created |
Created container cloud-node-manager |
| (x2) | openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-t6wgr |
Started |
Started container cloud-node-manager |
| (x2) | openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-b7mbg |
Created |
Created container cloud-node-manager |
| (x2) | openshift-cloud-controller-manager |
kubelet |
azure-cloud-node-manager-b7mbg |
Started |
Started container cloud-node-manager |
| | openshift-cloud-controller-manager | kubelet | azure-cloud-node-manager-b7mbg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fadc119bc4c8e630b76b0df84e31adb20b5484dcaf8495d0edcfe4288f414546" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-9q57h | Started | Started container approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-9q57h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-9q57h | Created | Created container approver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_0ad5ae0c-7017-484a-9c9e-d426a7249a00 became leader |
| | openshift-cloud-controller-manager-operator | ci-op-9xx71rvq-1e28e-w667k-master-1_71456c36-db13-49c7-98e6-0c47d3422fa8 | cluster-cloud-config-sync-leader | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_71456c36-db13-49c7-98e6-0c47d3422fa8 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:23.275218 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:33.274705 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:43.274799 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:53.275204 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:03.275731 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:13.275291 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:13.276318 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:55:13.276358 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: ") |
| (x48) | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.16.0-0.nightly-2024-06-10-211334 |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_76f8dd4a-d877-46d2-82d5-370dda4ac0c2 became leader |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-1 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-1 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-1 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-0 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-0 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-0 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-2 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-2 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-2 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp in Controller |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-7df985cbf9-f4swj_983eef32-c2d7-49c2-b366-29370406f626 became leader |
| (x4) | openshift-ingress | service-controller | router-default | UpdatedLoadBalancer | Updated load balancer with new hosts |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-5b9b5c7f89-z28dx | Created | Created container authentication-operator |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-5b9b5c7f89-z28dx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b6392aef797fc81a43507586d4924fb2f4eca833e6b01bb431df4d70849284" already present on machine |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-5b9b5c7f89-z28dx | Started | Started container authentication-operator |
| | openshift-cloud-controller-manager-operator | ci-op-9xx71rvq-1e28e-w667k-master-1_eaf9a437-d418-46fb-b05d-8642274cca15 | cluster-cloud-controller-manager-leader | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_eaf9a437-d418-46fb-b05d-8642274cca15 became leader |
| | openshift-machine-api | machine-api-controllers-857c68d88f-cpdp9_df95abd6-7557-416a-bd13-a5082d75ffb5 | cluster-api-provider-azure-leader | LeaderElection | machine-api-controllers-857c68d88f-cpdp9_df95abd6-7557-416a-bd13-a5082d75ffb5 became leader |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7cfc668fc8-mplwz became leader |
| (x3) | openshift-multus | kubelet | multus-4gxw6 | BackOff | Back-off restarting failed container kube-multus in pod multus-4gxw6_openshift-multus(ea446bb4-5aae-4ae5-aca5-f4c307f5a297) |
| | openshift-machine-config-operator | machine-config-operator | ci-op-9xx71rvq-1e28e-w667k-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | BackOff | Back-off restarting failed container machine-approver-controller in pod machine-approver-8477dc5fd6-82ddm_openshift-cluster-machine-approver(df3788b7-9514-4fe0-a98b-f9c83e403cd3) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28635060 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28635060 | SuccessfulCreate | Created pod: collect-profiles-28635060-5nb2j |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-28635045 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-78d6c6c648 to 3 from 2 |
| | openshift-apiserver | replicaset-controller | apiserver-7c577f45d7 | SuccessfulDelete | Deleted pod: apiserver-7c577f45d7-bp26v |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7c577f45d7 to 0 from 1 |
| | openshift-apiserver | replicaset-controller | apiserver-78d6c6c648 | SuccessfulCreate | Created pod: apiserver-78d6c6c648-tcdpn |
| | openshift-apiserver | kubelet | apiserver-7c577f45d7-bp26v | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-7c577f45d7-bp26v | Killing | Stopping container openshift-apiserver-check-endpoints |
| (x3) | openshift-multus | kubelet | multus-7hlr6 | BackOff | Back-off restarting failed container kube-multus in pod multus-7hlr6_openshift-multus(cafb2f14-e04b-4ef3-ab8e-e7a98149ccfb) |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator | azure-file-csi-driver-operator-lock | LeaderElection | azure-file-csi-driver-operator-66b9ff7945-fpvl2_63a337da-3278-4cd3-8eff-64fd7794d8ef became leader |
| | openshift-cluster-csi-drivers | azure-file-csi-driver-operator | openshift-cluster-csi-drivers | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: i-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:57:49.957147 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:57:59.956878 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:09.957084 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:19.957109 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:29.957559 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:29.958346 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 10:58:29.958383 1 cmd.go:105] timed out waiting for the condition |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator | azure-disk-csi-driver-operator-lock | LeaderElection | azure-disk-csi-driver-operator-7fcb8db8c9-bmkwq_a5dc466d-1182-4b4f-9f64-9539f0ff460a became leader |
| | openshift-cluster-csi-drivers | azure-disk-csi-driver-operator | openshift-cluster-csi-drivers | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| (x15) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdateFailed | Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Operation cannot be fulfilled on secrets "service-account-private-key": the object has been modified; please apply your changes to the latest version and try again |
| (x4) | openshift-multus | kubelet | multus-4gxw6 | Created | Created container kube-multus |
| | openshift-cluster-machine-approver | ci-op-9xx71rvq-1e28e-w667k-master-1_1632646e-07a0-489e-97fd-2e832eb0d34b | cluster-machine-approver-leader | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_1632646e-07a0-489e-97fd-2e832eb0d34b became leader |
| (x3) | openshift-multus | kubelet | multus-4gxw6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ece0819f12f73bd0f0a7c2b2d8034aeb5a68929dec9044efe8d6971a779f3ffd" already present on machine |
| (x4) | openshift-multus | kubelet | multus-4gxw6 | Started | Started container kube-multus |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Started | Started container machine-approver-controller |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-8477dc5fd6-82ddm | Created | Created container machine-approver-controller |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x3) | openshift-apiserver | kubelet | apiserver-7c577f45d7-bp26v | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x3) | openshift-apiserver | kubelet | apiserver-7c577f45d7-bp26v | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x4) | openshift-multus | kubelet | multus-7hlr6 | Started | Started container kube-multus |
| (x4) | openshift-multus | kubelet | multus-7hlr6 | Created | Created container kube-multus |
| (x3) | openshift-multus | kubelet | multus-7hlr6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-apiserver | multus | apiserver-78d6c6c648-tcdpn | AddedInterface | Add eth0 [10.130.0.44/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Started | Started container fix-audit-permissions |
| (x3) | openshift-multus | kubelet | multus-r82gp | BackOff | Back-off restarting failed container kube-multus in pod multus-r82gp_openshift-multus(a802e3d5-2d02-4746-839a-87f1c9de3547) |
| | openshift-kube-controller-manager | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-controller-manager | multus | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-f74744fc5 to 3 from 2 |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-f74744fc5 |
SuccessfulCreate |
Created pod: apiserver-f74744fc5-d2ds7 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6777f8cb5c |
SuccessfulDelete |
Deleted pod: apiserver-6777f8cb5c-cl69q | |
openshift-oauth-apiserver |
kubelet |
apiserver-6777f8cb5c-cl69q |
Killing |
Stopping container oauth-apiserver | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6777f8cb5c to 0 from 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 7 triggered by "required secret/service-account-private-key has changed" | |
openshift-marketplace |
kubelet |
community-operators-gv6mm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine | |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition |
| | openshift-marketplace | multus | community-operators-gv6mm | AddedInterface | Add eth0 [10.130.0.45/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | multus | certified-operators-ff97n | AddedInterface | Add eth0 [10.130.0.46/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-marketplace-qvmqw | AddedInterface | Add eth0 [10.130.0.47/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Created | Created container extract-utilities |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | multus | redhat-operators-zkfnp | AddedInterface | Add eth0 [10.130.0.48/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 495ms (495ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 515ms (515ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Created | Created container extract-content |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 497ms (497ms including waiting) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Started | Started container registry-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 593ms (593ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 657ms (657ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 571ms (571ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 568ms (568ms including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Started | Started container registry-server |
| (x4) | openshift-multus | kubelet | multus-r82gp | Created | Created container kube-multus |
| (x4) | openshift-multus | kubelet | multus-r82gp | Started | Started container kube-multus |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 725ms (725ms including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Started | Started container registry-server |
| (x3) | openshift-multus | kubelet | multus-r82gp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9518d76d829701a272518e2eeed8438692e49392b35d0f4b7dc897726e32824a" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: i-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:49.957147 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.956878 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.957084 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.957109 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.957559 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.958346 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:29.958383 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-marketplace | multus | certified-operators-xlp8k | AddedInterface | Add eth0 [10.130.0.50/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-7 -n openshift-kube-controller-manager because it was missing |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-6777f8cb5c-cl69q | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-marketplace | multus | community-operators-gv7zt | AddedInterface | Add eth0 [10.130.0.49/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | multus | redhat-operators-nnx4s | AddedInterface | Add eth0 [10.130.0.52/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-mbjsk | AddedInterface | Add eth0 [10.130.0.51/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | community-operators-2czqg | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 599ms (599ms including waiting) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 7 triggered by "required secret/service-account-private-key has changed" |
| | openshift-marketplace | multus | community-operators-2czqg | AddedInterface | Add eth0 [10.130.0.53/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-2czqg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | community-operators-2czqg | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-2czqg | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 7 created because required secret/service-account-private-key has changed |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 540ms (540ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 609ms (609ms including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-f74744fc5-d2ds7 | AddedInterface | Add eth0 [10.130.0.55/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 555ms (555ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | community-operators-2czqg | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 557ms (557ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Started | Started container registry-server |
| | openshift-marketplace | multus | certified-operators-q5sfs | AddedInterface | Add eth0 [10.130.0.54/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Created | Created container extract-utilities |
| | default | ovnkube-csr-approver-controller | csr-vfb7d | CSRApproved | CSR "csr-vfb7d" has been approved |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 562ms (562ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-2czqg | Started | Started container extract-content |
| | openshift-network-node-identity | ci-op-9xx71rvq-1e28e-w667k-master-2_b3cb6253-4785-47ef-b8ca-ac67bbfed3c3 | ovnkube-identity | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-2_b3cb6253-4785-47ef-b8ca-ac67bbfed3c3 became leader |
| | openshift-marketplace | kubelet | community-operators-2czqg | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" |
| | default | ovnkube-csr-approver-controller | csr-2kt27 | CSRApproved | CSR "csr-2kt27" has been approved |
| | default | ovnkube-csr-approver-controller | csr-7bzrf | CSRApproved | CSR "csr-7bzrf" has been approved |
| | default | ovnkube-csr-approver-controller | csr-gwzqx | CSRApproved | CSR "csr-gwzqx" has been approved |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 476ms (477ms including waiting) |
| | default | ovnkube-csr-approver-controller | csr-kxshz | CSRApproved | CSR "csr-kxshz" has been approved |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 599ms (599ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 575ms (575ms including waiting) |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Created | Created container fix-audit-permissions |
| | openshift-marketplace | multus | redhat-marketplace-zq589 | AddedInterface | Add eth0 [10.130.0.56/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Started | Started container fix-audit-permissions |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ErrorAddingResource | [cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9: cannot set up hybrid overlay ports, distributed router ip is nil] |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca16980fc0e2808b2bab35cc848ad16da6f79e43fd4cacf17d77d98c0d581d02" already present on machine |
| | openshift-marketplace | multus | redhat-operators-k7n2j | AddedInterface | Add eth0 [10.130.0.57/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: i-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:49.957147 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.956878 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.957084 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.957109 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.957559 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.958346 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:29.958383 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6; 0 nodes have achieved new revision 7" |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ErrorAddingResource | [cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp: cannot set up hybrid overlay ports, distributed router ip is nil] |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-2czqg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Created | Created container extract-content |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | ErrorUpdatingResource | failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49: cannot set up hybrid overlay ports, distributed router ip is nil |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ErrorUpdatingResource | failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp: cannot set up hybrid overlay ports, distributed router ip is nil |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Created | Created container extract-utilities |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ErrorUpdatingResource | failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9: cannot set up hybrid overlay ports, distributed router ip is nil |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 584ms (584ms including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | community-operators-2czqg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 599ms (599ms including waiting) |
| | default | controlplane | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | ErrorAddingResource | [cannot allocate hybrid overlay distributed router ip for nodes until all initial pods are processed, failed to set up hybrid overlay logical switch port for ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49: cannot set up hybrid overlay ports, distributed router ip is nil] |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Created | Created container extract-utilities |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Created | Created container oauth-apiserver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | default | ovnkube-csr-approver-controller | csr-hvxrs | CSRApproved | CSR "csr-hvxrs" has been approved |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | multus | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.58/23] from ovn-kubernetes |
| | default | ovnkube-csr-approver-controller | csr-6xkl4 | CSRApproved | CSR "csr-6xkl4" has been approved |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Started | Started container registry-server |
| | default | ovnkube-csr-approver-controller | csr-2lgj4 | CSRApproved | CSR "csr-2lgj4" has been approved |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Started | Started container extract-content |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 554ms (554ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-2czqg | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Created | Created container registry-server |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Started | Started container oauth-apiserver |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 524ms (524ms including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 503ms (503ms including waiting) |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [-]etcd-readiness failed: reason withheld [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]shutdown ok readyz check failed |
| | openshift-marketplace | kubelet | community-operators-2czqg | Started | Started container registry-server |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Started | Started container registry-server |
| | openshift-network-diagnostics | multus | network-check-target-mgs54 | AddedInterface | Add eth0 [10.128.2.5/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 527ms (527ms including waiting) |
| | openshift-network-diagnostics | multus | network-check-target-8qg9z | AddedInterface | Add eth0 [10.131.0.5/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-8xrbm | AddedInterface | Add eth0 [10.131.0.4/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:23.275218 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:33.274705 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:43.274799 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:54:53.275204 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:03.275731 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:13.275291 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:55:13.276318 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:55:13.276358 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:51.967859 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:01.968030 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:11.967460 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:21.968015 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:31.968088 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:41.967791 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:41.968659 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:41.968692 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | installer errors: installer: 172.30.0.1:443: connect: connection refused W0611 10:57:51.967859 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:01.968030 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:11.967460 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:21.968015 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:31.968088 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:41.967791 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:41.968659 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 10:58:41.968692 1 cmd.go:105] timed out waiting for the condition |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 591ms (591ms including waiting) |
| | default | kubelet | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | NodeReady | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 status is now: NodeReady |
| | default | kubelet | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | NodeReady | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 status is now: NodeReady |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-g5zzn |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-xv252 |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Created | Created container registry-server |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-sn28d |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:08.460733 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:18.460770 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:28.461125 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:38.461572 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:48.460760 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:48.461504 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:48.461542 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-network-operator | kubelet | iptables-alerter-sn28d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48910918f7c73a9f9ad6490fcead5fae8c17ab3e32beb778627c3dcbc8e3387c" |
| | openshift-ingress-canary | multus | ingress-canary-rcqsw | AddedInterface | Add eth0 [10.128.2.8/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | taint-eviction-controller | collect-profiles-28635060-5nb2j | TaintManagerEviction | Cancelling deletion of Pod openshift-operator-lifecycle-manager/collect-profiles-28635060-5nb2j |
| | openshift-dns | kubelet | dns-default-t22wm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-28635060-5nb2j | AddedInterface | Add eth0 [10.129.2.9/23] from ovn-kubernetes |
| | openshift-ingress | multus | router-default-7c66d9f4d8-hjjcl | AddedInterface | Add eth0 [10.129.2.7/23] from ovn-kubernetes |
| | openshift-dns | kubelet | dns-default-g5zzn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" |
| | openshift-dns | multus | dns-default-g5zzn | AddedInterface | Add eth0 [10.129.2.12/23] from ovn-kubernetes |
| | openshift-ingress | taint-eviction-controller | router-default-7c66d9f4d8-hjjcl | TaintManagerEviction | Cancelling deletion of Pod openshift-ingress/router-default-7c66d9f4d8-hjjcl |
| | openshift-network-diagnostics | multus | network-check-source-775df55c85-86pxw | AddedInterface | Add eth0 [10.129.2.10/23] from ovn-kubernetes |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-rcqsw |
| | openshift-network-diagnostics | taint-eviction-controller | network-check-source-775df55c85-86pxw | TaintManagerEviction | Cancelling deletion of Pod openshift-network-diagnostics/network-check-source-775df55c85-86pxw |
| | openshift-network-diagnostics | kubelet | network-check-source-775df55c85-86pxw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" |
| | openshift-monitoring | taint-eviction-controller | prometheus-operator-admission-webhook-566b55489f-2ktqr | TaintManagerEviction | Cancelling deletion of Pod openshift-monitoring/prometheus-operator-admission-webhook-566b55489f-2ktqr |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-2ktqr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9cb2094942cf6eba4ae69c856e11222e922ad4d839506d3e95913b068ec88c3" |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-566b55489f-2ktqr | AddedInterface | Add eth0 [10.129.2.8/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-xv252 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-vfv6g |
| | openshift-dns | multus | dns-default-t22wm | AddedInterface | Add eth0 [10.128.2.7/23] from ovn-kubernetes |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-t22wm |
| | openshift-ingress-canary | multus | ingress-canary-xv252 | AddedInterface | Add eth0 [10.129.2.11/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635060-5nb2j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" |
| | openshift-ingress-canary | kubelet | ingress-canary-rcqsw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsDisabled | Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (3 endpoints, 2 zones), addressType: IPv4 |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-566b55489f-wzvmv | AddedInterface | Add eth0 [10.128.2.9/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Started | Started container registry-server |
| | openshift-ingress | multus | router-default-7c66d9f4d8-wk77v | AddedInterface | Add eth0 [10.128.2.10/23] from ovn-kubernetes |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-4skx2 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-hpmwj |
| | default | kubelet | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | NodeReady | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp status is now: NodeReady |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-6qw2v |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-wzvmv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9cb2094942cf6eba4ae69c856e11222e922ad4d839506d3e95913b068ec88c3" |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48910918f7c73a9f9ad6490fcead5fae8c17ab3e32beb778627c3dcbc8e3387c" |
| | openshift-ingress-canary | multus | ingress-canary-4skx2 | AddedInterface | Add eth0 [10.131.0.8/23] from ovn-kubernetes |
| | openshift-dns | multus | dns-default-6qw2v | AddedInterface | Add eth0 [10.131.0.7/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-gv6mm | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-gv7zt | Killing | Stopping container registry-server |
| | openshift-ingress-canary | kubelet | ingress-canary-4skx2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" |
| | openshift-dns | kubelet | dns-default-6qw2v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" |
| | openshift-marketplace | kubelet | certified-operators-xlp8k | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-2czqg | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-qvmqw | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | certified-operators-ff97n | Killing | Stopping container registry-server |
| | openshift-dns | kubelet | dns-default-t22wm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-wzvmv | Created | Created container prometheus-operator-admission-webhook |
| | openshift-marketplace | kubelet | certified-operators-q5sfs | Killing | Stopping container registry-server |
| | openshift-dns | kubelet | dns-default-t22wm | Created | Created container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-mbjsk | Killing | Stopping container registry-server |
| | openshift-dns | kubelet | dns-default-t22wm | Started | Started container dns |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-wzvmv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9cb2094942cf6eba4ae69c856e11222e922ad4d839506d3e95913b068ec88c3" in 4.015s (4.015s including waiting) |
| | openshift-dns | kubelet | dns-default-t22wm | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-t22wm | Created | Created container dns |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-wzvmv | Started | Started container prometheus-operator-admission-webhook |
| | openshift-ingress-canary | kubelet | ingress-canary-rcqsw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" in 5.075s (5.075s including waiting) |
| | openshift-ingress-canary | kubelet | ingress-canary-rcqsw | Created | Created container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-rcqsw | Started | Started container serve-healthcheck-canary |
| | openshift-dns | kubelet | dns-default-t22wm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" in 5.078s (5.078s including waiting) |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:08.460733 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:18.460770 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:28.461125 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:38.461572 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:48.460760 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 10:58:48.461504 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 10:58:48.461542 1 cmd.go:106] timed out waiting for the condition |
| | openshift-marketplace | kubelet | redhat-marketplace-zq589 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-nnx4s | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-k7n2j | Killing | Stopping container registry-server |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | Started | Started container router |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | Created | Created container router |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48910918f7c73a9f9ad6490fcead5fae8c17ab3e32beb778627c3dcbc8e3387c" in 5.613s (5.613s including waiting) |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-2ktqr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9cb2094942cf6eba4ae69c856e11222e922ad4d839506d3e95913b068ec88c3" in 8.079s (8.079s including waiting) |
| | openshift-ingress-canary | kubelet | ingress-canary-4skx2 | Created | Created container serve-healthcheck-canary |
| | openshift-dns | kubelet | dns-default-6qw2v | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-6qw2v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-dns | kubelet | dns-default-6qw2v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" in 4.522s (4.522s including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-zkfnp | Killing | Stopping container registry-server |
| | openshift-ingress-canary | kubelet | ingress-canary-4skx2 | Started | Started container serve-healthcheck-canary |
| | openshift-dns | kubelet | dns-default-6qw2v | Created | Created container dns |
| | openshift-ingress-canary | kubelet | ingress-canary-4skx2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" in 4.546s (4.546s including waiting) |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well") |
| | openshift-dns | kubelet | dns-default-6qw2v | Started | Started container kube-rbac-proxy |
| | openshift-network-operator | kubelet | iptables-alerter-sn28d | Started | Started container iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-sn28d | Created | Created container iptables-alerter |
| | openshift-dns | kubelet | dns-default-6qw2v | Created | Created container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | Started | Started container router |
| | openshift-network-diagnostics | kubelet | network-check-source-775df55c85-86pxw | Started | Started container check-endpoints |
| | openshift-dns | kubelet | dns-default-g5zzn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db5c50d6151f584e498cd06f68ef6504fd0a35ff24943ecb50156062881d608e" in 13.015s (13.015s including waiting) |
| | openshift-network-diagnostics | kubelet | network-check-source-775df55c85-86pxw | Created | Created container check-endpoints |
| (x29) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9_openshift-machine-config-operator(7c0573f666d5e542150fa41029a3b8d0) |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-2ktqr | Started | Started container prometheus-operator-admission-webhook |
| | openshift-ingress-canary | kubelet | ingress-canary-xv252 | Created | Created container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-xv252 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" in 13.041s (13.041s including waiting) |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635060-5nb2j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" in 13.179s (13.179s including waiting) |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635060-5nb2j | Created | Created container collect-profiles |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48910918f7c73a9f9ad6490fcead5fae8c17ab3e32beb778627c3dcbc8e3387c" in 13.342s (13.342s including waiting) |
| | openshift-network-diagnostics | kubelet | network-check-source-775df55c85-86pxw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" in 13.046s (13.046s including waiting) |
| | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | Created | Created container router |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-566b55489f-2ktqr | Created | Created container prometheus-operator-admission-webhook |
| | openshift-ingress-canary | kubelet | ingress-canary-xv252 | Started | Started container serve-healthcheck-canary |
| | openshift-dns | kubelet | dns-default-g5zzn | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-g5zzn | Created | Created container dns |
| | openshift-dns | kubelet | dns-default-g5zzn | Created | Created container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635060-5nb2j | Started | Started container collect-profiles |
| | openshift-dns | kubelet | dns-default-g5zzn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-dns | kubelet | dns-default-g5zzn | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| (x28) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49_openshift-machine-config-operator(912b75dd3c3be97fb98fc6bf937eefb7) |
| | openshift-monitoring | replicaset-controller | prometheus-operator-9cd6bf8d5 | SuccessfulCreate | Created pod: prometheus-operator-9cd6bf8d5-d8nbk |
| | openshift-monitoring | multus | prometheus-operator-9cd6bf8d5-d8nbk | AddedInterface | Add eth0 [10.130.0.59/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb3b0af68cb042fa56c1eac2be34b4ff2d766e2c8e4d769349b5a8b4bfff37e" |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-9cd6bf8d5 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| (x10) | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x26) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp_openshift-machine-config-operator(6964b178dcd3596d4b9423b6a34c1587) |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsDisabled | Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (4 endpoints, 3 zones), addressType: IPv4 |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fcb3b0af68cb042fa56c1eac2be34b4ff2d766e2c8e4d769349b5a8b4bfff37e" in 2.689s (2.69s including waiting) |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Created | Created container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Started | Started container prometheus-operator |
| (x11) | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-7-retry-2-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-9cd6bf8d5-d8nbk | Created | Created container kube-rbac-proxy |
| | openshift-etcd | kubelet | installer-7-retry-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager-cert-syncer\" is terminated: Error: https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:39.166556 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:00:58.851680 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:58.851777 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:01:18.217883 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:01:18.217957 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: F0611 11:01:18.351928 1 base_controller.go:96] unable to sync caches for CertSyncController\nStaticPodsDegraded: " |
| | openshift-etcd | kubelet | installer-7-retry-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-7-retry-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| (x2) | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsDisabled | Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 |
| | openshift-etcd | multus | installer-7-retry-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28635060 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28635060, status: Complete |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-kube-scheduler | static-pod-installer | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-2 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-598b4cb887 to 1 |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-86886ccdb8 | SuccessfulCreate | Created pod: openshift-state-metrics-86886ccdb8-6v5s2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-86886ccdb8 to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container wait-for-host-port |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-lxgj9 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container wait-for-host-port |
| | openshift-kube-apiserver | multus | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-598b4cb887 | SuccessfulCreate | Created pod: kube-state-metrics-598b4cb887-xkxn7 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 on node ci-op-9xx71rvq-1e28e-w667k-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-h2gfv |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa7ef88f868ac71e4ef7fddd109c920efaece57f089e237830b24b9c256c8ba4" |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Created | Created container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-apiserver | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Started | Started container kube-rbac-proxy-main |
| | openshift-kube-apiserver | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Created | Created container kube-rbac-proxy-main |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-apiserver | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-monitoring | multus | openshift-state-metrics-86886ccdb8-6v5s2 | AddedInterface | Add eth0 [10.131.0.9/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-scheduler |
| (x2) | openshift-monitoring | endpoint-controller | node-exporter | FailedToUpdateEndpoint | Failed to update endpoint openshift-monitoring/node-exporter: Operation cannot be fulfilled on endpoints "node-exporter": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-4xjkt |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-j8fxj |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-w5svb |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-ppw7h |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b9279ce7b6bdf993e9b37924ece65982b806475f0a633ef2eae1c5c960e5e1d" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-controller-manager-cert-syncer\" is terminated: Error: https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:39.166556 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:00:58.851680 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:58.851777 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:01:18.217883 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:01:18.217957 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: F0611 11:01:18.351928 1 base_controller.go:96] unable to sync caches for CertSyncController\nStaticPodsDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" |
| | openshift-monitoring | multus | kube-state-metrics-598b4cb887-xkxn7 | AddedInterface | Add eth0 [10.131.0.10/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| (x10) | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" in 2.174s (2.174s including waiting) |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" in 2.228s (2.228s including waiting) |
| (x11) | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" in 2.085s (2.085s including waiting) |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" in 2.099s (2.099s including waiting) |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" in 2.87s (2.87s including waiting) |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b9279ce7b6bdf993e9b37924ece65982b806475f0a633ef2eae1c5c960e5e1d" in 2.783s (2.783s including waiting) |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Created | Created container kube-state-metrics |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Created | Created container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-lxgj9 | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-j8fxj | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa7ef88f868ac71e4ef7fddd109c920efaece57f089e237830b24b9c256c8ba4" in 2.819s (2.819s including waiting) |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-ppw7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Created | Created container openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-86886ccdb8-6v5s2 | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-h2gfv | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Created | Created container init-textfile |
| (x2) | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" in 4.278s (4.278s including waiting) |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Created | Created container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-598b4cb887-xkxn7 | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-w5svb | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9eba2f769468893fb6bd7407847653fca7153da88e00ef8d68af2dd5a3d28e7" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-4xjkt | Started | Started container kube-rbac-proxy |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-monitoring | replicaset-controller | metrics-server-66666c5bf | SuccessfulCreate | Created pod: metrics-server-66666c5bf-5b985 |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-2k6dh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a174cf59840331f2295deb5660fa1b584671c086c1ad64ce572cc5aad54c50" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-9a6ldng0krfu1 -n openshift-monitoring because it was missing |
| | openshift-monitoring | multus | metrics-server-66666c5bf-5b985 | AddedInterface | Add eth0 [10.128.2.11/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-5b985 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a174cf59840331f2295deb5660fa1b584671c086c1ad64ce572cc5aad54c50" |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-scheduler-cert-syncer |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-monitoring | multus | metrics-server-66666c5bf-2k6dh | AddedInterface | Add eth0 [10.131.0.11/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | metrics-server-66666c5bf | SuccessfulCreate | Created pod: metrics-server-66666c5bf-2k6dh |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-66666c5bf to 2 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready") |
| | openshift-kube-controller-manager | static-pod-installer | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container guard |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-2k6dh | Started | Started container metrics-server |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-5b985 | Started | Started container metrics-server |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.60/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-2k6dh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a174cf59840331f2295deb5660fa1b584671c086c1ad64ce572cc5aad54c50" in 2.66s (2.66s including waiting) |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-5b985 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a174cf59840331f2295deb5660fa1b584671c086c1ad64ce572cc5aad54c50" in 2.537s (2.537s including waiting) |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-2k6dh | Created | Created container metrics-server |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-scheduler-cert-syncer\" is terminated: Error: 9.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:30.697705 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:00:33.307678 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:33.307748 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:01:26.353360 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:01:26.353426 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: F0611 11:01:27.382663 1 base_controller.go:96] unable to sync caches for CertSyncController\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container guard |
| | openshift-monitoring | kubelet | metrics-server-66666c5bf-5b985 | Created | Created container metrics-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-scheduler-cert-syncer\" is terminated: Error: 9.1/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:30.697705 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:00:33.307678 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:00:33.307748 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: W0611 11:01:26.353360 1 reflector.go:539] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E0611 11:01:26.353426 1 reflector.go:147] k8s.io/client-go@v0.29.1/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: F0611 11:01:27.382663 1 base_controller.go:96] unable to sync caches for CertSyncController\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-scheduler because it changed |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.8:10257/healthz": dial tcp 10.0.0.8:10257: connect: connection refused body: |
| (x14) | openshift-ovn-kubernetes | kubelet | ovnkube-node-fh4k2 | BackOff | Back-off restarting failed container drop-icmp in pod ovnkube-node-fh4k2_openshift-ovn-kubernetes(7fb073d1-2e40-4979-bd1c-cdde8252b91e) |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.8:10257/healthz": dial tcp 10.0.0.8:10257: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_471fce2f-74ab-41d9-9fc9-d1deb857f4cc became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_b4b88974-7ff9-4ed8-a089-ff9174c0d0e2 became leader |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-0 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-0 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-0 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-2_79d43ead-10da-4e92-8eaa-b09b5196057c became leader |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-2 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-2 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-2 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-1 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-1 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-1 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp in Controller |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcdctl |
| | openshift-etcd | static-pod-installer | installer-7-retry-2-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 5 to 7 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 1 node is at revision 5; 1 node is at revision 6; 1 node is at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 1 node is at revision 6; 1 node is at revision 7" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | static-pod-installer | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 5 to 7 because node ci-op-9xx71rvq-1e28e-w667k-master-1 with revision 5 is the oldest |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.64/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/metrics-server -n openshift-monitoring because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:59.941259 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:09.940822 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:19.940980 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:29.941484 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.940922 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:39.941659 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:39.941689 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 6 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 5 to 6 because node ci-op-9xx71rvq-1e28e-w667k-master-0 with revision 5 is the oldest |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| (x20) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | multus | installer-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x11) | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.8:9980/readyz": dial tcp 10.0.0.8:9980: connect: connection refused |
| | openshift-etcd | endpoint-controller | etcd | FailedToUpdateEndpoint | Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container setup |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-5b9b5c7f89-z28dx_77859cf5-ef3a-4bfa-be7e-453e127d075a became leader |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | FastControllerResync | Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-ensure-env-vars |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 3 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container etcd-metrics |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is terminated: Error: resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0611 11:01:36.959923 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0611 11:01:36.959973 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0611 11:02:17.482829 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0611 11:02:17.482901 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0611 11:02:20.579286 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0611 11:02:20.579365 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: F0611 11:02:33.216726 1 base_controller.go:96] unable to sync caches for CertSyncController\nStaticPodsDegraded: " |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 container \"kube-controller-manager-cert-syncer\" is terminated: Error: resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0611 11:01:36.959923 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0611 11:01:36.959973 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0611 11:02:17.482829 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0611 11:02:17.482901 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0611 11:02:20.579286 1 reflector.go:539] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0611 11:02:20.579365 1 reflector.go:147] k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: F0611 11:02:33.216726 1 base_controller.go:96] unable to sync caches for CertSyncController\nStaticPodsDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-network-diagnostics | kubelet | network-check-target-qp2gp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:74a88136c1f22a00a7ffee265c05f3e0101ba89a3b297e2027fcc9d53230b6a1" already present on machine |
| | openshift-network-diagnostics | multus | network-check-target-qp2gp | AddedInterface | Add eth0 [10.129.2.5/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-xcz98 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20e7cc0c70dfe15e9bffb19cd84ad691a2f536c75032e323be137e932e06021a" |
| | openshift-multus | multus | network-metrics-daemon-xcz98 | AddedInterface | Add eth0 [10.129.2.4/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Startup probe failed: Get "https://10.0.0.8:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Startup probe error: Get "https://10.0.0.8:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 4 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:51.967859 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:01.968030 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:11.967460 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:21.968015 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:31.968088 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:41.967791 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:41.968659 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:41.968692 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "EtcdMembersDegraded: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:57:51.967859 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:01.968030 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:11.967460 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:21.968015 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:31.968088 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:41.967791 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:41.968659 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:41.968692 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-multus | kubelet | network-metrics-daemon-p98p7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20e7cc0c70dfe15e9bffb19cd84ad691a2f536c75032e323be137e932e06021a" |
| | openshift-multus | multus | network-metrics-daemon-p98p7 | AddedInterface | Add eth0 [10.128.2.4/23] from ovn-kubernetes |
| (x3) | openshift-network-operator | kubelet | iptables-alerter-888zr | BackOff | Back-off restarting failed container iptables-alerter in pod iptables-alerter-888zr_openshift-network-operator(706dce5a-a175-44ba-98ad-70322e36866d) |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 1 to 7 because static pod is ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 1 node is at revision 3; 1 node is at revision 5; 1 node is at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 3; 1 node is at revision 5; 1 node is at revision 7\nEtcdMembersAvailable: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 3 to 7 because node ci-op-9xx71rvq-1e28e-w667k-master-1 with revision 3 is the oldest |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing |
| (x5) | openshift-ingress-operator | kubelet | ingress-operator-66bb9945d4-25hsj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2086171405832d77db9abba287eaf6ec94d517ad8d8056a31b5b75dc2c421162" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x3) | openshift-network-operator | kubelet | iptables-alerter-888zr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine |
| (x4) | openshift-network-operator | kubelet | iptables-alerter-888zr | Created | Created container iptables-alerter |
| (x4) | openshift-network-operator | kubelet | iptables-alerter-888zr | Started | Started container iptables-alerter |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container setup |
| (x13) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | Failed to create installer pod for revision 7 count 0 on node "ci-op-9xx71rvq-1e28e-w667k-master-1": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| (x2) | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : failed to fetch token: Post "https://api-int.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 10.0.0.4:6443: i/o timeout |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-scheduler-cert-syncer |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-scheduler-cert-syncer |
| (x82) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | KubeAPIReadyz | readyz=true |
| (x43) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x44) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | EtcdCertSignerControllerUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_798a7dd6-6756-4e8f-a04a-96126a9a633f became leader |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.65/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-network-node-identity | kubelet | network-node-identity-xl5tj | BackOff | Back-off restarting failed container approver in pod network-node-identity-xl5tj_openshift-network-node-identity(389945c2-9545-4fff-ad8d-832758350bd0) |
| | openshift-network-node-identity | ci-op-9xx71rvq-1e28e-w667k-master-0_b3136bda-ba96-4901-a7ab-8bb476b89d25 | ovnkube-identity | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_b3136bda-ba96-4901-a7ab-8bb476b89d25 became leader |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp in Controller |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-1 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-1 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-1 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_ed446c2b-864d-441e-9782-afeb73f2ac13 became leader |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-2 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-2 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-2 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 in Controller |
| | default | node-controller | ci-op-9xx71rvq-1e28e-w667k-master-0 | RegisteredNode | Node ci-op-9xx71rvq-1e28e-w667k-master-0 event: Registered Node ci-op-9xx71rvq-1e28e-w667k-master-0 in Controller |
| (x2) | openshift-network-operator | kubelet | iptables-alerter-vfv6g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x10) | openshift-marketplace | kubelet | marketplace-operator-867c6b6ccc-rmltl | BackOff | Back-off restarting failed container marketplace-operator in pod marketplace-operator-867c6b6ccc-rmltl_openshift-marketplace(53f1a3bf-60ed-4e00-9b17-67b1a3e712dd) |
| (x2) | openshift-network-operator | kubelet | iptables-alerter-vfv6g | Created | Created container iptables-alerter |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-xl5tj | Created | Created container approver |
| (x2) | openshift-network-operator | kubelet | iptables-alerter-vfv6g | Started | Started container iptables-alerter |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-xl5tj | Started | Started container approver |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-xl5tj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f51f47793a3eda34f600e1e7eab027bc309b914eb8ea948765cf1a03549b34e4" already present on machine |
| (x2) | openshift-network-operator | kubelet | iptables-alerter-hpmwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:685c1ecb542461771adb7ed00ff73f21046cfacb3f65e656b4168cb6cc0e1dcd" already present on machine |
| (x2) | openshift-network-operator | kubelet | iptables-alerter-hpmwj | Started | Started container iptables-alerter |
| (x2) | openshift-network-operator | kubelet | iptables-alerter-hpmwj | Created | Created container iptables-alerter |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcd |
| | openshift-etcd | static-pod-installer | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcd-metrics |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-1_f7788920-85a3-4ced-b407-e3d4c9fd2a26 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 
11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: 
connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-scheduler\" is terminated: Error: 29\nStaticPodsDegraded: I0611 11:04:46.868824 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:46.929608 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:46.981425 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.047155 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.075669 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.109733 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.240812 1 reflector.go:351] Caches populated for *v1.ConfigMap from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.297462 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.330244 1 reflector.go:351] Caches populated for *v1.StorageClass from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.335251 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.346175 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.377346 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:05:06.582629 1 leaderelection.go:285] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nStaticPodsDegraded: E0611 11:05:06.591407 1 server.go:252] \"Leaderelection lost\"\nStaticPodsDegraded: I0611 11:05:06.591496 1 scheduling_queue.go:870] \"Scheduling queue is closed\"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: 
connection refused\nNodeInstallerDegraded: W0611 11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed |
installer errors: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 11:03:20.052533 1 cmd.go:106] 
timed out waiting for the condition | |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | BackOff | Back-off restarting failed container kube-scheduler in pod openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0_openshift-kube-scheduler(2b9a08053c55e258a76335101c72ecbc) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 0 to 7 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:08.460733 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:18.460770 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:28.461125 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:38.461572 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:48.460760 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 10:58:48.461504 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 10:58:48.461542 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 7",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 7") | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 
11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-scheduler\" is terminated: Error: 29\nStaticPodsDegraded: I0611 11:04:46.868824 1 reflector.go:351] Caches populated for *v1.StatefulSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:46.929608 1 reflector.go:351] Caches populated for *v1.PodDisruptionBudget from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:46.981425 1 reflector.go:351] Caches populated for *v1.ReplicaSet from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.047155 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.075669 1 reflector.go:351] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.109733 1 reflector.go:351] Caches populated for *v1.PersistentVolume from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.240812 1 reflector.go:351] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.297462 1 reflector.go:351] Caches populated for *v1.Service from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.330244 1 reflector.go:351] Caches populated for *v1.StorageClass from 
k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.335251 1 reflector.go:351] Caches populated for *v1.Node from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.346175 1 reflector.go:351] Caches populated for *v1.ReplicationController from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:04:47.377346 1 reflector.go:351] Caches populated for *v1.CSINode from k8s.io/client-go@v0.29.0/tools/cache/reflector.go:229\nStaticPodsDegraded: I0611 11:05:06.582629 1 leaderelection.go:285] failed to renew lease openshift-kube-scheduler/kube-scheduler: timed out waiting for the condition\nStaticPodsDegraded: E0611 11:05:06.591407 1 server.go:252] \"Leaderelection lost\"\nStaticPodsDegraded: I0611 11:05:06.591496 1 scheduling_queue.go:870] \"Scheduling queue is closed\"\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting 
installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-scheduler\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-scheduler pod=openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0_openshift-kube-scheduler(2b9a08053c55e258a76335101c72ecbc)\nNodeControllerDegraded: All master nodes are ready" | |
| (x12) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.8:10259/healthz": dial tcp 10.0.0.8:10259: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 7 because node ci-op-9xx71rvq-1e28e-w667k-master-1 static pod not found |
| (x14) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.8:10259/healthz": dial tcp 10.0.0.8:10259: connect: connection refused body: |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: i-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:31.404102 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:41.402998 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:51.403092 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:01.403769 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:11.402960 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:11.404078 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:11.404123 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed |
installer errors: installer: i-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:02:31.404102 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:02:41.402998 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:02:51.403092 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:01.403769 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:11.402960 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W0611 11:03:11.404078 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F0611 11:03:11.404123 1 cmd.go:105] 
timed out waiting for the condition | |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | multus | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.66/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container setup |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 container \"kube-scheduler\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-scheduler pod=openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0_openshift-kube-scheduler(2b9a08053c55e258a76335101c72ecbc)\nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcdctl |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-readyz |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 3; 1 node is at revision 5; 1 node is at revision 7\nEtcdMembersAvailable: 3 of 4 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 3; 1 node is at revision 5; 1 node is at revision 7\nEtcdMembersAvailable: 4 members are available" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | multus | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.67/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 3 to 7 because static pod is ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 1 node is at revision 5; 1 node is at revision 7" to "NodeInstallerProgressing: 1 node is at revision 5; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 3; 1 node is at revision 5; 1 node is at revision 7\nEtcdMembersAvailable: 4 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 7\nEtcdMembersAvailable: 4 members are available" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | static-pod-installer | installer-6-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 5 to 7 because node ci-op-9xx71rvq-1e28e-w667k-master-2 with revision 5 is the oldest |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.61/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine (x8) |
| | openshift-kube-apiserver | static-pod-installer | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-kube-controller-manager | static-pod-installer | installer-7-retry-1-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.6:10257/healthz": dial tcp 10.0.0.6:10257: connect: connection refused body: (x19) |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | ci-op-9xx71rvq-1e28e-w667k-master-0_f260dfca-e3bc-4764-820f-9e918a6eb6d9 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 on node ci-op-9xx71rvq-1e28e-w667k-master-1, Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2]" to "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodCreated | Created Pod/kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.68/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container guard |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container guard |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberRemove | removed member with ID: 11857714448295288924 |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request (x2) |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-oauth-apiserver | kubelet | apiserver-f74744fc5-d2ds7 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [-]etcd-readiness failed: reason withheld [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [+]shutdown ok readyz check failed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodUpdated |
Updated Pod/kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it changed | |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container cluster-policy-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-controller-manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 7" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 7" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 7" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 0 to 7 because static pod is ready |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-kube-controller-manager | endpoint-controller | kube-controller-manager | FailedToUpdateEndpoint | Failed to update endpoint openshift-kube-controller-manager/kube-controller-manager: Operation cannot be fulfilled on endpoints "kube-controller-manager": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-1 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcdctl |
| | openshift-etcd | static-pod-installer | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | ProbeError | Readiness probe error: Get "https://10.0.0.7:9980/readyz": dial tcp 10.0.0.7:9980: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcd-metrics |
| (x2) | openshift-ingress | kubelet | router-default-7c66d9f4d8-wk77v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48910918f7c73a9f9ad6490fcead5fae8c17ab3e32beb778627c3dcbc8e3387c" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 5 to 7 because static pod is ready |
| (x2) | openshift-ingress | kubelet | router-default-7c66d9f4d8-hjjcl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48910918f7c73a9f9ad6490fcead5fae8c17ab3e32beb778627c3dcbc8e3387c" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:31.404102 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:41.402998 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:51.403092 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:01.403769 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:11.402960 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:11.404078 1 cmd.go:466] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:11.404123 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 5; 1 node is at revision 6; 1 node is at revision 7" to "NodeInstallerProgressing: 1 node is at revision 6; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 1 node is at revision 6; 1 node is at revision 7" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 6; 2 nodes are at revision 7" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-prunecontroller | kube-controller-manager-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container pruner |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container pruner |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | multus | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-prunecontroller | kube-controller-manager-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container pruner |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container pruner |
| | openshift-kube-controller-manager | multus | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.69/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | multus | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.62/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-prunecontroller | kube-controller-manager-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-console-operator | replicaset-controller | console-operator-77db45cb75 | SuccessfulCreate | Created pod: console-operator-77db45cb75-pkp8c |
| | openshift-console-operator | deployment-controller | console-conversion-webhook | ScalingReplicaSet | Scaled up replica set console-conversion-webhook-7769b66855 to 1 |
| | openshift-console-operator | default-scheduler | console-conversion-webhook-7769b66855-gl8cc | Scheduled | Successfully assigned openshift-console-operator/console-conversion-webhook-7769b66855-gl8cc to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-77db45cb75 to 1 |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container pruner |
| | openshift-kube-controller-manager | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container pruner |
| | openshift-console-operator | replicaset-controller | console-conversion-webhook-7769b66855 | SuccessfulCreate | Created pod: console-conversion-webhook-7769b66855-gl8cc |
| | openshift-console-operator | default-scheduler | console-operator-77db45cb75-pkp8c | Scheduled | Successfully assigned openshift-console-operator/console-operator-77db45cb75-pkp8c to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | multus | console-operator-77db45cb75-pkp8c | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-console-operator | kubelet | console-operator-77db45cb75-pkp8c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b07cbd2fab2082bd0ac62169e53d12fa6ab017b27bdf37c97e82aa559828c8d2" |
| | openshift-console-operator | multus | console-conversion-webhook-7769b66855-gl8cc | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-console-operator | kubelet | console-conversion-webhook-7769b66855-gl8cc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b07cbd2fab2082bd0ac62169e53d12fa6ab017b27bdf37c97e82aa559828c8d2" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 6 to 7 because node ci-op-9xx71rvq-1e28e-w667k-master-2 with revision 6 is the oldest |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-console-operator | kubelet | console-operator-77db45cb75-pkp8c | Created | Created container console-operator |
| | openshift-console-operator | kubelet | console-operator-77db45cb75-pkp8c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b07cbd2fab2082bd0ac62169e53d12fa6ab017b27bdf37c97e82aa559828c8d2" in 3.654s (3.654s including waiting) |
| | openshift-console-operator | kubelet | console-conversion-webhook-7769b66855-gl8cc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b07cbd2fab2082bd0ac62169e53d12fa6ab017b27bdf37c97e82aa559828c8d2" in 3.67s (3.67s including waiting) |
| | openshift-monitoring | default-scheduler | monitoring-plugin-75c9c44bd5-2qdlt | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-75c9c44bd5-2qdlt to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-7d87f9854d to 2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.16.0-0.nightly-2024-06-10-211334" |
| | openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"}] |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| (x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| | openshift-console-operator | kubelet | console-conversion-webhook-7769b66855-gl8cc | Created | Created container conversion-webhook-server |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-75c9c44bd5 | SuccessfulCreate | Created pod: monitoring-plugin-75c9c44bd5-2qdlt |
| | openshift-console-operator | kubelet | console-conversion-webhook-7769b66855-gl8cc | Started | Started container conversion-webhook-server |
| (x2) | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| | openshift-console | replicaset-controller | downloads-7d87f9854d | SuccessfulCreate | Created pod: downloads-7d87f9854d-rlxjj |
| | openshift-monitoring | multus | monitoring-plugin-75c9c44bd5-2qdlt | AddedInterface | Add eth0 [10.129.2.13/23] from ovn-kubernetes |
| | openshift-console | default-scheduler | downloads-7d87f9854d-rlxjj | Scheduled | Successfully assigned openshift-console/downloads-7d87f9854d-rlxjj to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | default-scheduler | monitoring-plugin-75c9c44bd5-pnrvb | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-75c9c44bd5-pnrvb to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | multus | monitoring-plugin-75c9c44bd5-pnrvb | AddedInterface | Add eth0 [10.131.0.12/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-pnrvb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e916b5de7af0f8bd07324259867f42b1f21e7a5d6445d06c0f0d857a6a084acf" |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-77db45cb75-pkp8c_835777d2-45a7-40d2-866a-3e3370f43462 became leader |
| | openshift-console | replicaset-controller | downloads-7d87f9854d | SuccessfulCreate | Created pod: downloads-7d87f9854d-v9g6r |
| | openshift-console-operator | kubelet | console-operator-77db45cb75-pkp8c | Started | Started container console-operator |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-75c9c44bd5 | SuccessfulCreate | Created pod: monitoring-plugin-75c9c44bd5-pnrvb |
| | openshift-console | default-scheduler | downloads-7d87f9854d-v9g6r | Scheduled | Successfully assigned openshift-console/downloads-7d87f9854d-v9g6r to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-75c9c44bd5 to 2 |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-87fbfc4db to 2 |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-9fvnn |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-k8tr5 |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-ldzgh |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-qnf6s |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveInternalRegistryHostnameChanged | Internal registry hostname changed to "image-registry.openshift-image-registry.svc:5000" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/clo"...)}, ...}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "imagePolicyConfig": map[string]any{ + "internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000"), + }, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-6lgf8 |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-rqp2v |
| | openshift-image-registry | default-scheduler | node-ca-rqp2v | Scheduled | Successfully assigned openshift-image-registry/node-ca-rqp2v to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| | openshift-image-registry | default-scheduler | node-ca-qnf6s | Scheduled | Successfully assigned openshift-image-registry/node-ca-qnf6s to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DeploymentCreated | Created Deployment.apps/image-registry -n openshift-image-registry because it was missing |
| | openshift-image-registry | default-scheduler | node-ca-ldzgh | Scheduled | Successfully assigned openshift-image-registry/node-ca-ldzgh to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-image-registry | default-scheduler | node-ca-k8tr5 | Scheduled | Successfully assigned openshift-image-registry/node-ca-k8tr5 to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-image-registry | default-scheduler | node-ca-9fvnn | Scheduled | Successfully assigned openshift-image-registry/node-ca-9fvnn to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-image-registry | default-scheduler | node-ca-6lgf8 | Scheduled | Successfully assigned openshift-image-registry/node-ca-6lgf8 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| (x2) | openshift-image-registry | controllermanager | image-registry | NoPods | No matching pods found |
| | openshift-image-registry | replicaset-controller | image-registry-87fbfc4db | SuccessfulCreate | Created pod: image-registry-87fbfc4db-j5gnx |
| | openshift-image-registry | replicaset-controller | image-registry-87fbfc4db | SuccessfulCreate | Created pod: image-registry-87fbfc4db-ps72b |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-ps72b | FailedMount | MountVolume.SetUp failed for volume "registry-tls" : secret "image-registry-tls" not found |
| | openshift-image-registry | default-scheduler | image-registry-87fbfc4db-ps72b | Scheduled | Successfully assigned openshift-image-registry/image-registry-87fbfc4db-ps72b to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-image-registry | default-scheduler | image-registry-87fbfc4db-j5gnx | Scheduled | Successfully assigned openshift-image-registry/image-registry-87fbfc4db-j5gnx to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-2qdlt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e916b5de7af0f8bd07324259867f42b1f21e7a5d6445d06c0f0d857a6a084acf" |
| | openshift-console | multus | downloads-7d87f9854d-rlxjj | AddedInterface | Add eth0 [10.131.0.13/23] from ovn-kubernetes |
| | openshift-console | kubelet | downloads-7d87f9854d-rlxjj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:447d77445d22eaa400594e78b989a7fda2b4196f48ee40646e0c556847374572" |
| | openshift-kube-controller-manager | multus | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.63/23] from ovn-kubernetes |
| | openshift-console | multus | downloads-7d87f9854d-v9g6r | AddedInterface | Add eth0 [10.128.2.12/23] from ovn-kubernetes |
| | openshift-console | kubelet | downloads-7d87f9854d-v9g6r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:447d77445d22eaa400594e78b989a7fda2b4196f48ee40646e0c556847374572" |
| | openshift-image-registry | kubelet | node-ca-6lgf8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-image-registry | kubelet | node-ca-qnf6s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-ps72b | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| | openshift-image-registry | kubelet | node-ca-k8tr5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com in route downloads in namespace openshift-console",Upgradeable changed from Unknown to False ("DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com in route downloads in namespace openshift-console") |
| | openshift-image-registry | default-scheduler | image-registry-78579cd8f7-zxrg2 | Scheduled | Successfully assigned openshift-image-registry/image-registry-78579cd8f7-zxrg2 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-image-registry | job-controller | azure-path-fix | SuccessfulCreate | Created pod: azure-path-fix-bgvnb |
openshift-image-registry |
multus |
image-registry-87fbfc4db-ps72b |
AddedInterface |
Add eth0 [10.128.2.13/23] from ovn-kubernetes | |
openshift-image-registry |
deployment-controller |
image-registry |
ScalingReplicaSet |
Scaled down replica set image-registry-87fbfc4db to 1 from 2 | |
openshift-image-registry |
deployment-controller |
image-registry |
ScalingReplicaSet |
Scaled up replica set image-registry-78579cd8f7 to 1 | |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-image-registry | multus | image-registry-87fbfc4db-j5gnx | AddedInterface | Add eth0 [10.131.0.14/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | FailedMount | MountVolume.SetUp failed for volume "registry-tls" : secret "image-registry-tls" not found |
| | openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| | openshift-image-registry | replicaset-controller | image-registry-87fbfc4db | SuccessfulDelete | Deleted pod: image-registry-87fbfc4db-ps72b |
| | openshift-image-registry | kubelet | node-ca-ldzgh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-image-registry | kubelet | azure-path-fix-bgvnb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de17441284be3fbe91e2df7e2d46a547a658a327201f9b51b58c70fe54f8378e" |
| | openshift-image-registry | replicaset-controller | image-registry-78579cd8f7 | SuccessfulCreate | Created pod: image-registry-78579cd8f7-zxrg2 |
| | openshift-image-registry | kubelet | node-ca-rqp2v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-image-registry | kubelet | node-ca-9fvnn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| | openshift-image-registry | multus | azure-path-fix-bgvnb | AddedInterface | Add eth0 [10.129.2.14/23] from ovn-kubernetes |
| | openshift-image-registry | default-scheduler | azure-path-fix-bgvnb | Scheduled | Successfully assigned openshift-image-registry/azure-path-fix-bgvnb to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-78579cd8f7 to 2 from 1 |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-pnrvb | Started | Started container monitoring-plugin |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-2qdlt | Started | Started container monitoring-plugin |
| | openshift-image-registry | replicaset-controller | image-registry-78579cd8f7 | SuccessfulCreate | Created pod: image-registry-78579cd8f7-ssfzl |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-2qdlt | Created | Created container monitoring-plugin |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-2qdlt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e916b5de7af0f8bd07324259867f42b1f21e7a5d6445d06c0f0d857a6a084acf" in 2.517s (2.517s including waiting) |
| | openshift-image-registry | multus | image-registry-78579cd8f7-zxrg2 | AddedInterface | Add eth0 [10.129.2.15/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-pnrvb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e916b5de7af0f8bd07324259867f42b1f21e7a5d6445d06c0f0d857a6a084acf" in 2.436s (2.436s including waiting) |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/image-import-ca -n openshift-apiserver: cause by changes in data.image-registry.openshift-image-registry.svc..5000,data.image-registry.openshift-image-registry.svc.cluster.local..5000 |
| | openshift-monitoring | kubelet | monitoring-plugin-75c9c44bd5-pnrvb | Created | Created container monitoring-plugin |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-zxrg2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" |
| (x2) | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetUpdated | Updated DaemonSet.apps/node-ca -n openshift-image-registry because it changed |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ClusterAPIInstallAWS", "ClusterAPIInstallNutanix", "ClusterAPIInstallOpenStack", "ClusterAPIInstallVSphere", "DisableKubeletCloudCredentialProviders", "ExternalCloudProvider", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "ExternalCloudProviderGCP", "HardwareSpeed", "KMSv1", "MetricsServer", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "PrivateHostedZoneAWS", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs"}, Disabled:[]v1.FeatureGateName{"AutomatedEtcdBackup", "CSIDriverSharedResource", "ChunkSizeMiB", "ClusterAPIInstall", "ClusterAPIInstallAzure", "ClusterAPIInstallGCP", "ClusterAPIInstallIBMCloud", "ClusterAPIInstallPowerVS", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalRouteCertificate", "GCPClusterHostedDNS", "GCPLabelsTags", "GatewayAPI", "ImagePolicy", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InstallAlternateInfrastructureAWS", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImages", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "NewOLM", "NodeDisruptionPolicy", "NodeSwap", "OnClusterBuild", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "ServiceAccountTokenNodeBindingValidation", "ServiceAccountTokenPodNodeInfo", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "VSphereMultiVCenters", "ValidatingAdmissionPolicy", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "apiServerArguments": map[string]any{"feature-gates": []any{string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AutomatedEtcdBackup=false"), string("AzureWorkloadIdentity=true"), ...}}, + "imagePolicyConfig": map[string]any{ + "internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000"), + }, "projectConfig": map[string]any{"projectRequestMessage": string("")}, "routingConfig": map[string]any{"subdomain": string("apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com")}, ... // 2 identical entries } |
| | openshift-image-registry | kubelet | node-ca-qnf6s | Created | Created container node-ca |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | Killing | Stopping container openshift-apiserver |
| | openshift-image-registry | kubelet | node-ca-rqp2v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.65s (3.65s including waiting) |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-apiserver | replicaset-controller | apiserver-575b7cbf5 | SuccessfulCreate | Created pod: apiserver-575b7cbf5-rtpck |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-ps72b | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 2.877s (2.877s including waiting) |
| | openshift-image-registry | kubelet | node-ca-ldzgh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.425s (3.425s including waiting) |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-ps72b | Started | Started container registry |
| | openshift-apiserver | default-scheduler | apiserver-575b7cbf5-rtpck | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | replicaset-controller | apiserver-78d6c6c648 | SuccessfulDelete | Deleted pod: apiserver-78d6c6c648-zwlsw |
| | openshift-image-registry | kubelet | node-ca-qnf6s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.338s (3.338s including waiting) |
| | openshift-image-registry | kubelet | node-ca-qnf6s | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-6lgf8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.675s (3.675s including waiting) |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.") |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-ps72b | Created | Created container registry |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-78d6c6c648 to 2 from 3 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-575b7cbf5 to 1 from 0 |
| | openshift-image-registry | kubelet | node-ca-6lgf8 | Created | Created container node-ca |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 7 because node ci-op-9xx71rvq-1e28e-w667k-master-2 static pod not found |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | Created | Created container registry |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.12s (3.12s including waiting) |
| | openshift-image-registry | kubelet | node-ca-9fvnn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.916s (3.916s including waiting) |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-ps72b | Killing | Stopping container registry |
| | openshift-image-registry | kubelet | node-ca-ldzgh | Created | Created container node-ca |
| | openshift-image-registry | kubelet | node-ca-ldzgh | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-k8tr5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 3.947s (3.947s including waiting) |
| (x3) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-apiserver: cause by changes in data.config.yaml |
| | openshift-image-registry | kubelet | node-ca-k8tr5 | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-rqp2v | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-6lgf8 | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-rqp2v | Created | Created container node-ca |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | Started | Started container registry |
| | openshift-image-registry | kubelet | node-ca-9fvnn | Created | Created container node-ca |
| | openshift-image-registry | kubelet | node-ca-9fvnn | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-k8tr5 | Created | Created container node-ca |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com in route downloads in namespace openshift-console" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com in route downloads in namespace openshift-console\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-image-registry | kubelet | azure-path-fix-bgvnb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de17441284be3fbe91e2df7e2d46a547a658a327201f9b51b58c70fe54f8378e" in 5.126s (5.126s including waiting) |
| | openshift-image-registry | kubelet | azure-path-fix-bgvnb | Started | Started container azure-path-fix |
| | openshift-image-registry | kubelet | azure-path-fix-bgvnb | Created | Created container azure-path-fix |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-zxrg2 | Started | Started container registry |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-zxrg2 | Created | Created container registry |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-zxrg2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" in 4.419s (4.419s including waiting) |
| | openshift-apiserver | replicaset-controller | apiserver-9cf8b6f9b | SuccessfulCreate | Created pod: apiserver-9cf8b6f9b-mbncp |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-9cf8b6f9b to 1 from 0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-575b7cbf5 to 0 from 1 |
| | openshift-apiserver | default-scheduler | apiserver-575b7cbf5-rtpck | FailedScheduling | skip schedule deleting pod: openshift-apiserver/apiserver-575b7cbf5-rtpck |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." |
| | openshift-apiserver | replicaset-controller | apiserver-575b7cbf5 | SuccessfulDelete | Deleted pod: apiserver-575b7cbf5-rtpck |
| (x4) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-marketplace | kubelet | certified-operators-24rdr | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-pnlz7 | Killing | Stopping container registry-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-marketplace | kubelet | community-operators-bs94f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-console | default-scheduler | console-69d886b-gz7s8 | Scheduled | Successfully assigned openshift-console/console-69d886b-gz7s8 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | multus | community-operators-bs94f | AddedInterface | Add eth0 [10.130.0.65/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | community-operators-bs94f | Scheduled | Successfully assigned openshift-marketplace/community-operators-bs94f to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | kubelet | community-operators-8x76m | Killing | Stopping container registry-server |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-69d886b to 2 |
| | openshift-marketplace | multus | certified-operators-r7fnh | AddedInterface | Add eth0 [10.130.0.64/23] from ovn-kubernetes |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-console | replicaset-controller | console-69d886b | SuccessfulCreate | Created pod: console-69d886b-glfxr |
| | openshift-console | replicaset-controller | console-69d886b | SuccessfulCreate | Created pod: console-69d886b-gz7s8 |
| | openshift-marketplace | default-scheduler | redhat-marketplace-9qlzk | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-9qlzk to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-console | default-scheduler | console-69d886b-glfxr | Scheduled | Successfully assigned openshift-console/console-69d886b-glfxr to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-marketplace | default-scheduler | certified-operators-r7fnh | Scheduled | Successfully assigned openshift-marketplace/certified-operators-r7fnh to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | kubelet | community-operators-bs94f | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 5 to 6 because static pod is ready |
| | openshift-console | kubelet | console-69d886b-glfxr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" |
| | openshift-console | multus | console-69d886b-glfxr | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 8 triggered by "required configmap/config has changed" |
| | openshift-marketplace | kubelet | redhat-operators-ddg4k | Killing | Stopping container registry-server |
| | openshift-marketplace | multus | redhat-marketplace-9qlzk | AddedInterface | Add eth0 [10.130.0.67/23] from ovn-kubernetes |
| | openshift-image-registry | job-controller | azure-path-fix | Completed | Job completed |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Started | Started container extract-utilities |
| (x2) | openshift-apiserver | kubelet | apiserver-78d6c6c648-zwlsw | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" |
| | openshift-console | kubelet | console-69d886b-gz7s8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-bs94f | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-bs94f | Started | Started container extract-utilities |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:40.056541 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:02:50.052606 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:00.052110 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:10.052879 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 11:03:20.051639 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W0611 
11:03:20.052502 1 cmd.go:467] Error getting installer pods on current node ci-op-9xx71rvq-1e28e-w667k-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F0611 11:03:20.052533 1 cmd.go:106] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6" | |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." |
| | openshift-console | multus | console-69d886b-gz7s8 | AddedInterface | Add eth0 [10.130.0.66/23] from ovn-kubernetes |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.c\"...),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| | openshift-marketplace | kubelet | community-operators-bs94f | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-bs94f | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-bs94f | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 620ms (620ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | default-scheduler | redhat-operators-7m5wd | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-7m5wd to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | multus | redhat-operators-7m5wd | AddedInterface | Add eth0 [10.130.0.68/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 575ms (575ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 655ms (655ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Started | Started container extract-content |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com in route downloads in namespace openshift-console\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found",Upgradeable changed from False to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-8 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-r7fnh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 599ms (599ms including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 559ms (559ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-9qlzk | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-bs94f | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-authentication | default-scheduler | oauth-openshift-77cfb9765f-xfv92 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-77cfb9765f-xfv92 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-77cfb9765f to 3 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-marketplace | kubelet | community-operators-bs94f | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-bs94f | Created | Created container registry-server |
| | openshift-authentication | replicaset-controller | oauth-openshift-77cfb9765f | SuccessfulCreate | Created pod: oauth-openshift-77cfb9765f-s96nm |
| | openshift-marketplace | kubelet | community-operators-bs94f | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 571ms (571ms including waiting) |
| | openshift-authentication | replicaset-controller | oauth-openshift-77cfb9765f | SuccessfulCreate | Created pod: oauth-openshift-77cfb9765f-xfv92 |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserverworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication | default-scheduler | oauth-openshift-77cfb9765f-m6z6r | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-77cfb9765f-m6z6r to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-authentication | replicaset-controller | oauth-openshift-77cfb9765f | SuccessfulCreate | Created pod: oauth-openshift-77cfb9765f-m6z6r |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 551ms (551ms including waiting) |
| | openshift-authentication | default-scheduler | oauth-openshift-77cfb9765f-s96nm | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-77cfb9765f-s96nm to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-8 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container pruner |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-kube-scheduler | multus | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| (x3) | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-m6z6r | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found |
| (x3) | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-s96nm | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cee9993b6f"...)}},   "controllers": []any{string("openshift.io/build"), string("openshift.io/build-config-change"), string("openshift.io/builder-rolebindings"), string("openshift.io/builder-serviceaccount"), ...},   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d87b9ddf5e"...)}}, + "dockerPullSecret": map[string]any{ + "internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000"), + },   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 531ms (531ms including waiting) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-8 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-xfv92 | FailedMount | MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Started | Started container registry-server |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-7cfc668fc8-mplwz | Killing | Stopping container controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| (x4) | openshift-image-registry | default-scheduler | image-registry-78579cd8f7-ssfzl | FailedScheduling | 0/6 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match pod topology spread constraints, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match pod topology spread constraints, 3 Preemption is not helpful for scheduling. |
| | openshift-marketplace | kubelet | redhat-operators-7m5wd | Created | Created container registry-server |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-controller-manager | replicaset-controller | controller-manager-7cfc668fc8 | SuccessfulDelete | Deleted pod: controller-manager-7cfc668fc8-mplwz |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-7cfc668fc8 to 2 from 3 |
| | openshift-console | kubelet | console-69d886b-gz7s8 | Started | Started container console |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6d7d8b6854 to 2 from 3 |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") |
| | openshift-console | kubelet | console-69d886b-gz7s8 | Created | Created container console |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7699bb97f8 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1."),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container setup |
| | openshift-kube-scheduler | multus | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.71/23] from ovn-kubernetes |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-console | kubelet | console-69d886b-gz7s8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" in 5.961s (5.961s including waiting) |
| | openshift-controller-manager | default-scheduler | controller-manager-7699bb97f8-f78zq | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6cd8cd5668 to 1 from 0 |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-metadatacontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-8 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container pruner |
| | openshift-controller-manager | replicaset-controller | controller-manager-7699bb97f8 | SuccessfulCreate | Created pod: controller-manager-7699bb97f8-f78zq |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "cloud-config": []any{string("/etc/kubernetes/static-pod-resources/configmaps/cloud-config/clo"...)}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, ... // 2 identical entries } |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6cd8cd5668 | SuccessfulCreate | Created pod: route-controller-manager-6cd8cd5668-5cjz8 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6cd8cd5668-5cjz8 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container pruner |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6d7d8b6854 | SuccessfulDelete | Deleted pod: route-controller-manager-6d7d8b6854-9jxht |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6d7d8b6854-9jxht | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager | default-scheduler | controller-manager-7699bb97f8-f78zq | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7699bb97f8-f78zq to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6cd8cd5668-5cjz8 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6cd8cd5668-5cjz8 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-xfv92 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" |
| | openshift-authentication | multus | oauth-openshift-77cfb9765f-xfv92 | AddedInterface | Add eth0 [10.130.0.69/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-m6z6r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" |
| | openshift-console | kubelet | console-69d886b-glfxr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" in 7.541s (7.541s including waiting) |
| | openshift-console | kubelet | console-69d886b-glfxr | Created | Created container console |
| | openshift-authentication | multus | oauth-openshift-77cfb9765f-s96nm | AddedInterface | Add eth0 [10.129.0.70/23] from ovn-kubernetes |
| | openshift-authentication | multus | oauth-openshift-77cfb9765f-m6z6r | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-69d886b-glfxr | Started | Started container console |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-8 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-s96nm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7cfc668fc8-d2fkd became leader |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-console | replicaset-controller | console-69d886b | SuccessfulDelete | Deleted pod: console-69d886b-gz7s8 |
| | openshift-kube-scheduler | multus | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.71/23] from ovn-kubernetes |
| | openshift-route-controller-manager | multus | route-controller-manager-6cd8cd5668-5cjz8 | AddedInterface | Add eth0 [10.130.0.70/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6cd8cd5668-5cjz8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd-resources-copy |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6cd8cd5668-5cjz8 | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6cd8cd5668-5cjz8 | Created | Created container route-controller-manager |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-69d886b to 1 from 2 |
| | openshift-controller-manager | kubelet | controller-manager-7699bb97f8-f78zq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" already present on machine |
| | openshift-console | kubelet | console-69d886b-gz7s8 | Killing | Stopping container console |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68ae5e595cb6b6ffa3f6861f7a41a92f5db8e9cd77fabb216dd7a96b9c1b4cf5" already present on machine |
| | openshift-console | default-scheduler | console-df9898fb7-fpr2h | Scheduled | Successfully assigned openshift-console/console-df9898fb7-fpr2h to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-console | replicaset-controller | console-df9898fb7 | SuccessfulCreate | Created pod: console-df9898fb7-fpr2h |
| (x10) | openshift-etcd |
kubelet |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Unhealthy |
Readiness probe failed: Get "https://10.0.0.7:9980/readyz": dial tcp 10.0.0.7:9980: connect: connection refused |
openshift-console |
replicaset-controller |
console-df9898fb7 |
SuccessfulCreate |
Created pod: console-df9898fb7-27zvt | |
openshift-controller-manager |
multus |
controller-manager-7699bb97f8-f78zq |
AddedInterface |
Add eth0 [10.129.0.72/23] from ovn-kubernetes | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-f78zq |
Created |
Created container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-f78zq |
Started |
Started container controller-manager | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-df9898fb7 to 2 | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-metrics | |
openshift-controller-manager |
default-scheduler |
controller-manager-7699bb97f8-rm5lc |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | |
openshift-console |
multus |
console-df9898fb7-fpr2h |
AddedInterface |
Add eth0 [10.129.0.73/23] from ovn-kubernetes | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7cfc668fc8 |
SuccessfulDelete |
Deleted pod: controller-manager-7cfc668fc8-d2fkd | |
openshift-console |
kubelet |
console-df9898fb7-fpr2h |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7cfc668fc8 to 1 from 2 | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd | |
openshift-controller-manager |
kubelet |
controller-manager-7cfc668fc8-d2fkd |
Killing |
Stopping container controller-manager | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7699bb97f8 |
SuccessfulCreate |
Created pod: controller-manager-7699bb97f8-rm5lc | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container etcd-readyz | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container pruner | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container pruner | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-8 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-7699bb97f8 to 2 from 1 | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-s96nm |
Started |
Started container oauth-openshift | |
openshift-authentication |
replicaset-controller |
oauth-openshift-65668fcd95 |
SuccessfulCreate |
Created pod: oauth-openshift-65668fcd95-ch7kw | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-6cd8cd5668 to 2 from 1 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-77cfb9765f |
SuccessfulDelete |
Deleted pod: oauth-openshift-77cfb9765f-m6z6r | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6cd8cd5668-862rg |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | |
openshift-image-registry |
kubelet |
image-registry-87fbfc4db-j5gnx |
Killing |
Stopping container registry | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-s96nm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" in 3.31s (3.31s including waiting) | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-s96nm |
Created |
Created container oauth-openshift | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-6d7d8b6854 to 1 from 2 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d7d8b6854-dlnkl |
Killing |
Stopping container route-controller-manager | |
openshift-image-registry |
replicaset-controller |
image-registry-87fbfc4db |
SuccessfulDelete |
Deleted pod: image-registry-87fbfc4db-j5gnx | |
openshift-image-registry |
deployment-controller |
image-registry |
ScalingReplicaSet |
Scaled down replica set image-registry-87fbfc4db to 0 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-77cfb9765f to 2 from 3 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6cd8cd5668 |
SuccessfulCreate |
Created pod: route-controller-manager-6cd8cd5668-862rg | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-65668fcd95 to 1 from 0 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6d7d8b6854 |
SuccessfulDelete |
Deleted pod: route-controller-manager-6d7d8b6854-dlnkl | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-xfv92 |
Created |
Created container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-m6z6r |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" in 3.913s (3.913s including waiting) | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-xfv92 |
Started |
Started container oauth-openshift | |
| (x5) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-xfv92 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" in 4.037s (4.037s including waiting) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-8 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-m6z6r |
Started |
Started container oauth-openshift | |
openshift-controller-manager |
default-scheduler |
controller-manager-7699bb97f8-rm5lc |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-7699bb97f8-rm5lc to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-m6z6r |
Created |
Created container oauth-openshift | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6cd8cd5668-862rg |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6cd8cd5668-862rg to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-kube-apiserver |
multus |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
AddedInterface |
Add eth0 [10.130.0.72/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-rm5lc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" already present on machine | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-rm5lc |
Created |
Created container controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd8cd5668-862rg |
Started |
Started container route-controller-manager | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-controller-manager |
multus |
controller-manager-7699bb97f8-rm5lc |
AddedInterface |
Add eth0 [10.128.0.62/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd8cd5668-862rg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" already present on machine | |
openshift-route-controller-manager |
multus |
route-controller-manager-6cd8cd5668-862rg |
AddedInterface |
Add eth0 [10.129.0.74/23] from ovn-kubernetes | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-7699bb97f8-rm5lc became leader | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-rm5lc |
Started |
Started container controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd8cd5668-862rg |
Created |
Created container route-controller-manager | |
openshift-authentication |
kubelet |
oauth-openshift-77cfb9765f-m6z6r |
Killing |
Stopping container oauth-openshift | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container installer | |
openshift-controller-manager |
kubelet |
controller-manager-7cfc668fc8-xtcks |
Killing |
Stopping container controller-manager | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-8 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7cfc668fc8 |
SuccessfulDelete |
Deleted pod: controller-manager-7cfc668fc8-xtcks | |
openshift-image-registry |
default-scheduler |
image-registry-78579cd8f7-ssfzl |
FailedScheduling |
0/6 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7699bb97f8 |
SuccessfulCreate |
Created pod: controller-manager-7699bb97f8-w5mfh | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-7699bb97f8 to 3 from 2 | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0-0.nightly-2024-06-10-211334, 0 replicas available" | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7cfc668fc8 to 0 from 1 | |
| (x3) | openshift-authentication |
default-scheduler |
oauth-openshift-65668fcd95-ch7kw |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-6cd8cd5668 to 3 from 2 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6cd8cd5668-l6jjn |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.72.13:443/healthz\": dial tcp 172.30.72.13:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-6d7d8b6854 to 0 from 1 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6d7d8b6854-qjgq9 |
Killing |
Stopping container route-controller-manager | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6d7d8b6854 |
SuccessfulDelete |
Deleted pod: route-controller-manager-6d7d8b6854-qjgq9 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6cd8cd5668 |
SuccessfulCreate |
Created pod: route-controller-manager-6cd8cd5668-l6jjn | |
| (x2) | openshift-controller-manager |
default-scheduler |
controller-manager-7699bb97f8-w5mfh |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-8 -n openshift-kube-apiserver because it was missing | |
openshift-console |
kubelet |
console-df9898fb7-fpr2h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" in 6.63s (6.63s including waiting) | |
openshift-console |
kubelet |
console-df9898fb7-fpr2h |
Created |
Created container console | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6cd8cd5668-l6jjn |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6cd8cd5668-l6jjn to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-console |
kubelet |
console-df9898fb7-fpr2h |
Started |
Started container console | |
openshift-route-controller-manager |
multus |
route-controller-manager-6cd8cd5668-l6jjn |
AddedInterface |
Add eth0 [10.128.0.63/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd8cd5668-l6jjn |
Started |
Started container route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd8cd5668-l6jjn |
Created |
Created container route-controller-manager | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-8 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-7699bb97f8-w5mfh |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-7699bb97f8-w5mfh to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cd8cd5668-l6jjn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f5484eee39d22c97ef8b258c63a00940d97593abc951acad7aec3117e1d65019" already present on machine | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-w5mfh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36405aaf37dd3a4676764e25cebf2d0832944a3b96cc5c3b93ec896d0af969f3" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-controller-manager |
multus |
controller-manager-7699bb97f8-w5mfh |
AddedInterface |
Add eth0 [10.130.0.73/23] from ovn-kubernetes | |
| (x4) | openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/downloads -n openshift-console because it changed |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-6cd8cd5668-l6jjn_c9acf2d9-b050-4d5d-a219-91050162998f became leader | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-oauthserverworkloadcontroller |
authentication-operator |
DeploymentUpdated |
Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-w5mfh |
Created |
Created container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-7699bb97f8-w5mfh |
Started |
Started container controller-manager | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-65668fcd95 to 0 from 1 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-665dd97ff4 to 1 from 0 | |
openshift-image-registry |
kubelet |
image-registry-87fbfc4db-ps72b |
Unhealthy |
Readiness probe failed: Get "https://10.128.2.13:5000/healthz": dial tcp 10.128.2.13:5000: connect: connection refused | |
openshift-authentication |
replicaset-controller |
oauth-openshift-65668fcd95 |
SuccessfulDelete |
Deleted pod: oauth-openshift-65668fcd95-ch7kw | |
openshift-image-registry |
kubelet |
image-registry-87fbfc4db-ps72b |
ProbeError |
Readiness probe error: Get "https://10.128.2.13:5000/healthz": dial tcp 10.128.2.13:5000: connect: connection refused body: | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-console | kubelet | downloads-7d87f9854d-rlxjj | Started | Started container download-server |
| | openshift-console | kubelet | downloads-7d87f9854d-rlxjj | Created | Created container download-server |
| | openshift-console | kubelet | downloads-7d87f9854d-rlxjj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:447d77445d22eaa400594e78b989a7fda2b4196f48ee40646e0c556847374572" in 31.282s (31.282s including waiting) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-8 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | replicaset-controller | oauth-openshift-665dd97ff4 | SuccessfulCreate | Created pod: oauth-openshift-665dd97ff4-j8d2d |
| (x2) | openshift-console | kubelet | downloads-7d87f9854d-rlxjj | Unhealthy | Readiness probe failed: Get "http://10.131.0.13:8080/": dial tcp 10.131.0.13:8080: connect: connection refused |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com returns '503 Service Unavailable'" |
| (x2) | openshift-console | kubelet | downloads-7d87f9854d-rlxjj | ProbeError | Readiness probe error: Get "http://10.131.0.13:8080/": dial tcp 10.131.0.13:8080: connect: connection refused body: |
| | openshift-console | kubelet | downloads-7d87f9854d-v9g6r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:447d77445d22eaa400594e78b989a7fda2b4196f48ee40646e0c556847374572" in 32.494s (32.494s including waiting) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-controller-manager | static-pod-installer | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-console | kubelet | downloads-7d87f9854d-v9g6r | Created | Created container download-server |
| | openshift-console | kubelet | downloads-7d87f9854d-v9g6r | Started | Started container download-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-8 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-image-registry | default-scheduler | image-registry-78579cd8f7-ssfzl | Scheduled | Successfully assigned openshift-image-registry/image-registry-78579cd8f7-ssfzl to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-8 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | ProbeError | Readiness probe error: Get "https://10.0.0.7:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Unhealthy | Readiness probe failed: Get "https://10.0.0.7:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x5) | openshift-console | default-scheduler | console-df9898fb7-27zvt | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| (x3) | openshift-console | kubelet | downloads-7d87f9854d-v9g6r | ProbeError | Readiness probe error: Get "http://10.128.2.12:8080/": dial tcp 10.128.2.12:8080: connect: connection refused body: |
| (x3) | openshift-console | kubelet | downloads-7d87f9854d-v9g6r | Unhealthy | Readiness probe failed: Get "http://10.128.2.12:8080/": dial tcp 10.128.2.12:8080: connect: connection refused |
| | openshift-image-registry | multus | image-registry-78579cd8f7-ssfzl | AddedInterface | Add eth0 [10.128.2.14/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-ssfzl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 7\nEtcdMembersAvailable: 4 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-8 -n openshift-kube-apiserver because it was missing |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-ssfzl | Created | Created container registry |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | ProbeError | Startup probe error: Get "https://10.0.0.7:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Unhealthy | Startup probe failed: Get "https://10.0.0.7:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 9 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 8 created because required configmap/config has changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 8 triggered by "required configmap/config has changed" |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-ssfzl | Started | Started container registry |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 7" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 8",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 7" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 8" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-9 -n openshift-kube-apiserver because it was missing |
| | openshift-console | default-scheduler | console-df9898fb7-27zvt | Scheduled | Successfully assigned openshift-console/console-df9898fb7-27zvt to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| (x4) | openshift-authentication | default-scheduler | oauth-openshift-665dd97ff4-j8d2d | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-df9898fb7-27zvt | Created | Created container console |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.75/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container pruner |
| | openshift-console | multus | console-df9898fb7-27zvt | AddedInterface | Add eth0 [10.130.0.74/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-df9898fb7-27zvt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" already present on machine |
| | openshift-console | kubelet | console-df9898fb7-27zvt | Started | Started container console |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container pruner |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89206cb191ea89871d18b482edd9417d13327fab7091ed43293046345c80c3d7" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95cb052ed20a9c01d1029497da60445a5425edcc6a6f642ebed4f1d5c3411d51" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | openshift-kube-controller-manager | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-9 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3608263143074270988 name:\"ci-op-9xx71rvq-1e28e-w667k-master-0\" peerURLs:\"https://10.0.0.8:2380\" clientURLs:\"https://10.0.0.8:2379\" Healthy:true Took:2.898834ms Error:<nil>} {Member:ID:9039689361178516505 name:\"ci-op-9xx71rvq-1e28e-w667k-master-1\" peerURLs:\"https://10.0.0.6:2380\" clientURLs:\"https://10.0.0.6:2379\" Healthy:true Took:1.997092ms Error:<nil>} {Member:ID:11862787134384716550 name:\"ci-op-9xx71rvq-1e28e-w667k-master-2\" peerURLs:\"https://10.0.0.7:2380\" clientURLs:\"https://10.0.0.7:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.0.7:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 5 to 7 because static pod is ready |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.75/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | ProbeError | Readiness probe error: Get "https://10.131.0.14:5000/healthz": dial tcp 10.131.0.14:5000: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container pruner |
| | openshift-image-registry | kubelet | image-registry-87fbfc4db-j5gnx | Unhealthy | Readiness probe failed: Get "https://10.131.0.14:5000/healthz": dial tcp 10.131.0.14:5000: connect: connection refused |
| | openshift-kube-apiserver | kubelet | installer-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/cloud-config-9 -n openshift-kube-apiserver because it was missing |
| (x5) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | ProbeError | Readiness probe error: Get "https://10.0.0.7:10257/healthz": dial tcp 10.0.0.7:10257: connect: connection refused body: |
| (x5) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Unhealthy | Readiness probe failed: Get "https://10.0.0.7:10257/healthz": dial tcp 10.0.0.7:10257: connect: connection refused |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 on node ci-op-9xx71rvq-1e28e-w667k-master-2" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container pruner |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container pruner |
| | openshift-authentication | default-scheduler | oauth-openshift-665dd97ff4-j8d2d | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-665dd97ff4-j8d2d to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-etcd | multus | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-j8d2d | Started | Started container oauth-openshift |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-9 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-j8d2d | Created | Created container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-j8d2d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication | multus | oauth-openshift-665dd97ff4-j8d2d | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.76/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-665dd97ff4 to 2 from 1 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-77cfb9765f to 1 from 2 |
| | openshift-authentication | replicaset-controller | oauth-openshift-665dd97ff4 | SuccessfulCreate | Created pod: oauth-openshift-665dd97ff4-n8rsd |
| | openshift-authentication | replicaset-controller | oauth-openshift-77cfb9765f | SuccessfulDelete | Deleted pod: oauth-openshift-77cfb9765f-s96nm |
| | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-s96nm | Killing | Stopping container oauth-openshift |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container pruner |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container pruner |
openshift-etcd-operator |
openshift-cluster-etcd-operator-prunecontroller |
etcd-operator |
PodCreated |
Created Pod/revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-9 -n openshift-kube-apiserver because it was missing | |
openshift-etcd |
kubelet |
revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-2 on node ci-op-9xx71rvq-1e28e-w667k-master-2" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container pruner |
| | openshift-etcd | multus | revision-pruner-7-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.76/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 6; 2 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 6 to 7 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-kube-apiserver | multus | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.77/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 9 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 9 created because required configmap/config has changed,optional configmap/oauth-metadata has been created |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container pruner |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it was missing |
| (x10) | openshift-apiserver | default-scheduler | apiserver-9cf8b6f9b-mbncp | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| (x5) | openshift-authentication | default-scheduler | oauth-openshift-665dd97ff4-n8rsd | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 8" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 9",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 9" |
| | openshift-kube-apiserver | multus | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.77/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.78/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container installer |
| | openshift-authentication | default-scheduler | oauth-openshift-665dd97ff4-n8rsd | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-665dd97ff4-n8rsd to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-apiserver | default-scheduler | apiserver-9cf8b6f9b-mbncp | Scheduled | Successfully assigned openshift-apiserver/apiserver-9cf8b6f9b-mbncp to ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-n8rsd | Created | Created container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-n8rsd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" already present on machine |
| | openshift-authentication | multus | oauth-openshift-665dd97ff4-n8rsd | AddedInterface | Add eth0 [10.129.0.79/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Created | Created container openshift-apiserver |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-n8rsd | Started | Started container oauth-openshift |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-apiserver | multus | apiserver-9cf8b6f9b-mbncp | AddedInterface | Add eth0 [10.129.0.78/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-mbncp | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-authentication | replicaset-controller | oauth-openshift-665dd97ff4 | SuccessfulCreate | Created pod: oauth-openshift-665dd97ff4-hbknv |
| | openshift-authentication | replicaset-controller | oauth-openshift-77cfb9765f | SuccessfulDelete | Deleted pod: oauth-openshift-77cfb9765f-xfv92 |
| | openshift-authentication | kubelet | oauth-openshift-77cfb9765f-xfv92 | Killing | Stopping container oauth-openshift |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-665dd97ff4 to 3 from 2 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-77cfb9765f to 0 from 1 |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-9cf8b6f9b to 2 from 1 |
| | openshift-apiserver | replicaset-controller | apiserver-9cf8b6f9b | SuccessfulCreate | Created pod: apiserver-9cf8b6f9b-lvcjl |
| | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | replicaset-controller | apiserver-78d6c6c648 | SuccessfulDelete | Deleted pod: apiserver-78d6c6c648-tcdpn |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-78d6c6c648 to 1 from 2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-9-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3608263143074270988 name:\"ci-op-9xx71rvq-1e28e-w667k-master-0\" peerURLs:\"https://10.0.0.8:2380\" clientURLs:\"https://10.0.0.8:2379\" Healthy:true Took:2.898834ms Error:<nil>} {Member:ID:9039689361178516505 name:\"ci-op-9xx71rvq-1e28e-w667k-master-1\" peerURLs:\"https://10.0.0.6:2380\" clientURLs:\"https://10.0.0.6:2379\" Healthy:true Took:1.997092ms Error:<nil>} {Member:ID:11862787134384716550 name:\"ci-op-9xx71rvq-1e28e-w667k-master-2\" peerURLs:\"https://10.0.0.7:2380\" clientURLs:\"https://10.0.0.7:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.0.7:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3608263143074270988 name:\"ci-op-9xx71rvq-1e28e-w667k-master-0\" peerURLs:\"https://10.0.0.8:2380\" clientURLs:\"https://10.0.0.8:2379\" Healthy:true Took:2.898834ms Error:<nil>} {Member:ID:9039689361178516505 name:\"ci-op-9xx71rvq-1e28e-w667k-master-1\" peerURLs:\"https://10.0.0.6:2380\" clientURLs:\"https://10.0.0.6:2379\" Healthy:true Took:1.997092ms Error:<nil>} {Member:ID:11862787134384716550 name:\"ci-op-9xx71rvq-1e28e-w667k-master-2\" peerURLs:\"https://10.0.0.7:2380\" clientURLs:\"https://10.0.0.7:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.0.7:2379]: context deadline exceeded}]\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-kube-apiserver | kubelet | installer-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-kube-apiserver | multus | installer-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.79/23] from ovn-kubernetes |
| (x3) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" |
| | openshift-kube-apiserver | kubelet | installer-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3608263143074270988 name:\"ci-op-9xx71rvq-1e28e-w667k-master-0\" peerURLs:\"https://10.0.0.8:2380\" clientURLs:\"https://10.0.0.8:2379\" Healthy:true Took:2.898834ms Error:<nil>} {Member:ID:9039689361178516505 name:\"ci-op-9xx71rvq-1e28e-w667k-master-1\" peerURLs:\"https://10.0.0.6:2380\" clientURLs:\"https://10.0.0.6:2379\" Healthy:true Took:1.997092ms Error:<nil>} {Member:ID:11862787134384716550 name:\"ci-op-9xx71rvq-1e28e-w667k-master-2\" peerURLs:\"https://10.0.0.7:2380\" clientURLs:\"https://10.0.0.7:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.0.7:2379]: context deadline exceeded}]\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" |
| | openshift-kube-apiserver | kubelet | installer-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-scripts -n openshift-etcd: cause by changes in data.etcd.env |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 8 triggered by "required configmap/etcd-pod has changed" |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/restore-etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-8 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-8 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-8 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-8 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-8 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-8 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreate | Revision 8 created because required configmap/etcd-pod has changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 8 triggered by "required configmap/etcd-pod has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-8 -n openshift-etcd because it was missing |
| (x4) | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| (x10) | openshift-console | kubelet | console-69d886b-glfxr | Unhealthy | Startup probe failed: Get "https://10.128.0.59:8443/health": dial tcp 10.128.0.59:8443: connect: connection refused |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container pruner |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | multus | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | multus | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | AddedInterface | Add eth0 [10.129.0.80/23] from ovn-kubernetes |
| (x3) | openshift-authentication | default-scheduler | oauth-openshift-665dd97ff4-hbknv | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container pruner |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 7; 0 nodes have achieved new revision 8"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8\nEtcdMembersAvailable: 3 members are available" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 7 to 8 because node ci-op-9xx71rvq-1e28e-w667k-master-0 with revision 7 is the oldest |
| | openshift-etcd | multus | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.80/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container pruner |
| | openshift-etcd | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-authentication | default-scheduler | oauth-openshift-665dd97ff4-hbknv | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-665dd97ff4-hbknv to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-etcd | multus | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-8-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-etcd because it was missing |
| | openshift-authentication | multus | oauth-openshift-665dd97ff4-hbknv | AddedInterface | Add eth0 [10.130.0.81/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-hbknv | Created | Created container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-hbknv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7788c3460beebdd7e720c71ec3eca004cdfe2003b051103d3fccc7ef087f6eb3" already present on machine |
| | openshift-etcd | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container installer |
| (x10) | openshift-console | kubelet | console-df9898fb7-fpr2h | Unhealthy | Startup probe failed: Get "https://10.129.0.73:8443/health": dial tcp 10.129.0.73:8443: connect: connection refused |
| | openshift-etcd | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container installer |
| | openshift-authentication | kubelet | oauth-openshift-665dd97ff4-hbknv | Started | Started container oauth-openshift |
| (x11) | openshift-console | kubelet | console-69d886b-glfxr | ProbeError | Startup probe error: Get "https://10.128.0.59:8443/health": dial tcp 10.128.0.59:8443: connect: connection refused body: |
| (x3) | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | Unhealthy | Readiness probe failed: Get "https://10.130.0.44:8443/readyz": dial tcp 10.130.0.44:8443: connect: connection refused |
| (x4) | openshift-apiserver | kubelet | apiserver-78d6c6c648-tcdpn | ProbeError | Readiness probe error: Get "https://10.130.0.44:8443/readyz": dial tcp 10.130.0.44:8443: connect: connection refused body: |
| (x11) | openshift-console | kubelet | console-df9898fb7-fpr2h | ProbeError | Startup probe error: Get "https://10.129.0.73:8443/health": dial tcp 10.129.0.73:8443: connect: connection refused body: |
| | openshift-kube-apiserver | static-pod-installer | installer-9-ci-op-9xx71rvq-1e28e-w667k-master-2 | StaticPodInstallerCompleted | Successfully installed revision 9 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-9xx71rvq-1e28e-w667k-master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 on node ci-op-9xx71rvq-1e28e-w667k-master-2" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| (x10) | openshift-console | kubelet | console-df9898fb7-27zvt | Unhealthy | Startup probe failed: Get "https://10.130.0.74:8443/health": dial tcp 10.130.0.74:8443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| (x10) | openshift-console | kubelet | console-df9898fb7-27zvt | ProbeError | Startup probe error: Get "https://10.130.0.74:8443/health": dial tcp 10.130.0.74:8443: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-2 | KubeAPIReadyz | readyz=true |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcd-readyz |
| | openshift-etcd | static-pod-installer | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-0 | StaticPodInstallerCompleted | Successfully installed revision 8 |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 | Killing | Stopping container etcdctl |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 |
AddedInterface |
Add eth0 [10.130.0.82/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Created |
Created container guard | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 |
Started |
Started container guard | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com returns '503 Service Unavailable'") | |
openshift-console |
replicaset-controller |
console-69d886b |
SuccessfulDelete |
Deleted pod: console-69d886b-glfxr | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-69d886b to 0 from 1 | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0-0.nightly-2024-06-10-211334, 0 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.16.0-0.nightly-2024-06-10-211334, 1 replicas available",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-9xx71rvq-1e28e.qe.azure.devcluster.openshift.com returns '503 Service Unavailable'" | |
| (x7) | openshift-apiserver |
default-scheduler |
apiserver-9cf8b6f9b-lvcjl |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodUpdated |
Updated Pod/kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-kube-apiserver because it changed | |
openshift-apiserver |
multus |
apiserver-9cf8b6f9b-lvcjl |
AddedInterface |
Add eth0 [10.130.0.83/23] from ovn-kubernetes | |
openshift-apiserver |
default-scheduler |
apiserver-9cf8b6f9b-lvcjl |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-9cf8b6f9b-lvcjl to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Started |
Started container fix-audit-permissions | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Created |
Created container fix-audit-permissions | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-lvcjl |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
replicaset-controller |
apiserver-78d6c6c648 |
SuccessfulDelete |
Deleted pod: apiserver-78d6c6c648-d7kss | |
openshift-apiserver |
kubelet |
apiserver-78d6c6c648-d7kss |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-78d6c6c648 to 0 from 1 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-9cf8b6f9b to 3 from 2 | |
openshift-apiserver |
replicaset-controller |
apiserver-9cf8b6f9b |
SuccessfulCreate |
Created pod: apiserver-9cf8b6f9b-hqh69 | |
openshift-apiserver |
kubelet |
apiserver-78d6c6c648-d7kss |
Killing |
Stopping container openshift-apiserver | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well") | |
| (x19) | openshift-etcd |
kubelet |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 |
ProbeError |
Readiness probe error: Get "https://10.0.0.8:9980/readyz": dial tcp 10.0.0.8:9980: connect: connection refused body: |
openshift-apiserver |
kubelet |
apiserver-78d6c6c648-d7kss |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-78d6c6c648-d7kss pod)" | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container setup | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container setup | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container etcd-readyz | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 0 to 9 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 2 nodes are at revision 7; 1 node is at revision 9",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 7; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 node is at revision 9" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 7 to 9 because node ci-op-9xx71rvq-1e28e-w667k-master-0 with revision 7 is the oldest | |
openshift-kube-apiserver |
multus |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-0 |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-9-ci-op-9xx71rvq-1e28e-w667k-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Unhealthy |
Startup probe failed: Get "https://10.0.0.8:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openshift-etcd |
kubelet |
etcd-ci-op-9xx71rvq-1e28e-w667k-master-0 |
ProbeError |
Startup probe error: Get "https://10.0.0.8:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 7; 0 nodes have achieved new revision 8" to "NodeInstallerProgressing: 2 nodes are at revision 7; 1 node is at revision 8",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 node is at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 7 to 8 because static pod is ready | |
openshift-marketplace |
default-scheduler |
certified-operators-bn86m |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-bn86m to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" | |
openshift-marketplace |
multus |
certified-operators-bn86m |
AddedInterface |
Add eth0 [10.130.0.84/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 679ms (679ms including waiting) | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 557ms (557ms including waiting) | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Created |
Created container registry-server | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 7 to 8 because node ci-op-9xx71rvq-1e28e-w667k-master-1 with revision 7 is the oldest | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-8-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-etcd because it was missing | |
openshift-etcd |
kubelet |
installer-8-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine | |
openshift-etcd |
kubelet |
installer-8-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container installer | |
openshift-etcd |
multus |
installer-8-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.81/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
installer-8-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container installer | |
openshift-marketplace |
kubelet |
certified-operators-bn86m |
Killing |
Stopping container registry-server | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok 
[+]poststarthook/apiservice-discovery-controller ok [-]shutdown failed: reason withheld readyz check failed | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
static-pod-installer |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 9 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 |
Killing |
Stopping container kube-apiserver | |
| (x5) | openshift-apiserver |
default-scheduler |
apiserver-9cf8b6f9b-hqh69 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
openshift-marketplace |
default-scheduler |
redhat-marketplace-5tqft |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-5tqft to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-78d6c6c648-d7kss pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are crashlooping in terminated apiserver-78d6c6c648-d7kss pod)" | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" | |
openshift-marketplace |
multus |
redhat-marketplace-5tqft |
AddedInterface |
Add eth0 [10.130.0.85/23] from ovn-kubernetes | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-9xx71rvq-1e28e-w667k-master-2_8270cfa9-69f8-4125-8e8d-25db5ccd6d5c became leader | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 498ms (498ms including waiting) | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are crashlooping in terminated apiserver-78d6c6c648-d7kss pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 574ms (574ms including waiting) | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" | |
openshift-marketplace |
kubelet |
redhat-marketplace-5tqft |
Started |
Started container registry-server | |
openshift-apiserver |
default-scheduler |
apiserver-9cf8b6f9b-hqh69 |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-9cf8b6f9b-hqh69 to ci-op-9xx71rvq-1e28e-w667k-master-0 | |
openshift-apiserver |
multus |
apiserver-9cf8b6f9b-hqh69 |
AddedInterface |
Add eth0 [10.128.0.71/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-hqh69 |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-hqh69 |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-9cf8b6f9b-hqh69 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
| | | | | | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-9cf8b6f9b-hqh69 pod)" |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-hqh69 | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-hqh69 | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-hqh69 | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-hqh69 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-hqh69 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41df16ff0bfa036df50519669edcdbd96e6396e816a62a89dc3b326da8c79d79" already present on machine |
| | openshift-apiserver | kubelet | apiserver-9cf8b6f9b-hqh69 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-9cf8b6f9b-hqh69 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-9cf8b6f9b-hqh69 pod)" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-9cf8b6f9b-hqh69 pod)" to "All is well" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.6:9980/readyz": dial tcp 10.0.0.6:9980: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcd |
| | openshift-etcd | static-pod-installer | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 8 |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container etcd-readyz |
| | openshift-marketplace | kubelet | redhat-marketplace-5tqft | Killing | Stopping container registry-server |
| (x2) | openshift-monitoring | controllermanager | alertmanager-main | NoPods | No matching pods found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | default-scheduler | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | default-scheduler | thanos-querier-567fbb8d4b-nzc6t | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-567fbb8d4b-nzc6t to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-grpc-tls-dhqjjp8v87kee -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | thanos-querier-567fbb8d4b | SuccessfulCreate | Created pod: thanos-querier-567fbb8d4b-5wkwj |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | multus | alertmanager-main-1 | AddedInterface | Add eth0 [10.131.0.15/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" |
| | openshift-monitoring | replicaset-controller | thanos-querier-567fbb8d4b | SuccessfulCreate | Created pod: thanos-querier-567fbb8d4b-nzc6t |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-567fbb8d4b to 2 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-monitoring | default-scheduler | alertmanager-main-1 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-1 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | default-scheduler | thanos-querier-567fbb8d4b-5wkwj | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-567fbb8d4b-5wkwj to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : secret "alertmanager-main-tls" not found |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f43437c43a1932bd4b62e7561a92cc5d85bf776a0349df7965451bc0482d4483" |
| | openshift-monitoring | multus | thanos-querier-567fbb8d4b-nzc6t | AddedInterface | Add eth0 [10.131.0.16/23] from ovn-kubernetes |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.129.2.16/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" |
| | openshift-monitoring | multus | thanos-querier-567fbb8d4b-5wkwj | AddedInterface | Add eth0 [10.128.2.15/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f43437c43a1932bd4b62e7561a92cc5d85bf776a0349df7965451bc0482d4483" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" in 2.569s (2.569s including waiting) |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" in 2.558s (2.558s including waiting) |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| (x2) | openshift-monitoring | controllermanager | prometheus-k8s | NoPods | No matching pods found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 node is at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 node is at revision 8\nEtcdMembersAvailable: 3 members are available" |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1d10e2ca4e2a2c1e039e2ece57e28b13daec87608161983a77206ec28a87560" |
| | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Claim prometheus-data-prometheus-k8s-0 Pod prometheus-k8s-0 in StatefulSet prometheus-k8s success |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f43437c43a1932bd4b62e7561a92cc5d85bf776a0349df7965451bc0482d4483" in 3.097s (3.097s including waiting) |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Created | Created container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-9838dml7sm2ad -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Created | Created container thanos-query |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1d10e2ca4e2a2c1e039e2ece57e28b13daec87608161983a77206ec28a87560" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container init-config-reloader |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f43437c43a1932bd4b62e7561a92cc5d85bf776a0349df7965451bc0482d4483" in 3.365s (3.365s including waiting) |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Claim prometheus-data-prometheus-k8s-1 Pod prometheus-k8s-1 in StatefulSet prometheus-k8s success |
| | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-1 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'disk.csi.azure.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openshift-monitoring | disk.csi.azure.com_azure-disk-csi-driver-controller-6d9996db94-26g2j_e15d5c59-0fce-43c8-83b5-3558487b70d5 | prometheus-data-prometheus-k8s-1 | Provisioning | External provisioner is provisioning volume for claim "openshift-monitoring/prometheus-data-prometheus-k8s-1" |
| | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-1 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'disk.csi.azure.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openshift-monitoring | disk.csi.azure.com_azure-disk-csi-driver-controller-6d9996db94-26g2j_e15d5c59-0fce-43c8-83b5-3558487b70d5 | prometheus-data-prometheus-k8s-0 | Provisioning | External provisioner is provisioning volume for claim "openshift-monitoring/prometheus-data-prometheus-k8s-0" |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" in 2.027s (2.027s including waiting) |
| (x12) | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:9980/readyz": dial tcp 10.0.0.6:9980: connect: connection refused |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1d10e2ca4e2a2c1e039e2ece57e28b13daec87608161983a77206ec28a87560" in 2.919s (2.919s including waiting) |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" in 2.321s (2.321s including waiting) |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Created | Created container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Created | Created container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1d10e2ca4e2a2c1e039e2ece57e28b13daec87608161983a77206ec28a87560" in 3.088s (3.088s including waiting) |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Created | Created container kube-rbac-proxy-rules |
| | openshift-monitoring | disk.csi.azure.com_azure-disk-csi-driver-controller-6d9996db94-26g2j_e15d5c59-0fce-43c8-83b5-3558487b70d5 | prometheus-data-prometheus-k8s-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-fee668ae-ad9a-4fb2-b7c6-7cdc4efc5290 |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Started | Started container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Created | Created container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | disk.csi.azure.com_azure-disk-csi-driver-controller-6d9996db94-26g2j_e15d5c59-0fce-43c8-83b5-3558487b70d5 | prometheus-data-prometheus-k8s-1 | ProvisioningSucceeded | Successfully provisioned volume pvc-045852ab-94c1-4daf-b49e-ab92602b707c |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-5wkwj | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-567fbb8d4b-nzc6t | Started | Started container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | default-scheduler | prometheus-k8s-1 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-1 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | default-scheduler | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/thanos-querier-pdb -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f925bb31c2c18c74b574e35352286036c72fcbb4ed95331ea4d1ba5d5b58f173" in 2.205s (2.205s including waiting) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()" to "All is well" |
| | openshift-monitoring | attachdetach-controller | prometheus-k8s-1 | SuccessfulAttachVolume | AttachVolume.Attach succeeded for volume "pvc-045852ab-94c1-4daf-b49e-ab92602b707c" |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.16.0-0.nightly-2024-06-10-211334_openshift" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.16.0-0.nightly-2024-06-10-211334"} {"oauth-apiserver" "4.16.0-0.nightly-2024-06-10-211334"}] to [{"operator" "4.16.0-0.nightly-2024-06-10-211334"} {"oauth-apiserver" "4.16.0-0.nightly-2024-06-10-211334"} {"oauth-openshift" "4.16.0-0.nightly-2024-06-10-211334_openshift"}] |
| (x15) | openshift-etcd | kubelet | etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.6:9980/readyz": dial tcp 10.0.0.6:9980: connect: connection refused body: |
| | openshift-monitoring | attachdetach-controller | prometheus-k8s-0 | SuccessfulAttachVolume | AttachVolume.Attach succeeded for volume "pvc-fee668ae-ad9a-4fb2-b7c6-7cdc4efc5290" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container init-config-reloader |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.131.0.17/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" |
| | openshift-monitoring | multus | prometheus-k8s-1 | AddedInterface | Add eth0 [10.128.2.16/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e3ff0220509aee666082ff1316ade676d06a1f7167b2feba51d2ae64e7bb8e7" |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" in 2.186s (2.186s including waiting) |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e3ff0220509aee666082ff1316ade676d06a1f7167b2feba51d2ae64e7bb8e7" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e3ff0220509aee666082ff1316ade676d06a1f7167b2feba51d2ae64e7bb8e7" in 4.375s (4.375s including waiting) |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f43437c43a1932bd4b62e7561a92cc5d85bf776a0349df7965451bc0482d4483" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-marketplace | default-scheduler | redhat-operators-mgz88 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-mgz88 to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container setup |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Started | Started container extract-utilities |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container thanos-sidecar |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e3ff0220509aee666082ff1316ade676d06a1f7167b2feba51d2ae64e7bb8e7" in 4.141s (4.141s including waiting) |
| | openshift-marketplace | multus | redhat-operators-mgz88 | AddedInterface | Add eth0 [10.130.0.86/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container prometheus |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Created | Created container extract-utilities |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f43437c43a1932bd4b62e7561a92cc5d85bf776a0349df7965451bc0482d4483" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfc189a1a1eae1859c452714b8bbc6c66fa5b837717f7da83631ee8de437fc63" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container kube-rbac-proxy-web |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-ensure-env-vars |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Started | Started container extract-content |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-ensure-env-vars |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 496ms (496ms including waiting) |
| | openshift-marketplace | multus | community-operators-pkj7w | AddedInterface | Add eth0 [10.130.0.87/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container kube-rbac-proxy |
| | openshift-marketplace | default-scheduler | community-operators-pkj7w | Scheduled | Successfully assigned openshift-marketplace/community-operators-pkj7w to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:592ec166fee1aabf6b7dfd82cdd541e5cb608f99c7cc41c9ad3841dd1b854776" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcd-readyz |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 591ms (591ms including waiting) |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container etcd-readyz |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 8.446s (8.446s including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-mgz88 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 555ms (555ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Created | Created container registry-server |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-marketplace | kubelet | community-operators-pkj7w | Started | Started container registry-server |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x45) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.8:6443/readyz": dial tcp 10.0.0.8:6443: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Startup probe error: Get "https://10.0.0.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Startup probe failed: Get "https://10.0.0.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-marketplace | kubelet | community-operators-bs94f | Killing | Stopping container registry-server |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok healthz check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 7 to 8 because static pod is ready |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 7; 1 node is at revision 8" to "NodeInstallerProgressing: 1 node is at revision 7; 2 nodes are at revision 8",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 node is at revision 8\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 7 to 8 because node ci-op-9xx71rvq-1e28e-w667k-master-2 with revision 7 is the oldest |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | AddedInterface | Add eth0 [10.130.0.88/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container installer |
| | openshift-etcd | kubelet | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-console | replicaset-controller | console-995d678f4 | SuccessfulCreate | Created pod: console-995d678f4-wv2br |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-df9898fb7 to 1 from 2 |
| | openshift-console | kubelet | console-df9898fb7-fpr2h | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-995d678f4 | SuccessfulCreate | Created pod: console-995d678f4-9mbdf |
| | openshift-console | default-scheduler | console-995d678f4-9mbdf | Scheduled | Successfully assigned openshift-console/console-995d678f4-9mbdf to ci-op-9xx71rvq-1e28e-w667k-master-0 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.16.0-0.nightly-2024-06-10-211334, 1 replicas available" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-console | replicaset-controller | console-df9898fb7 | SuccessfulDelete | Deleted pod: console-df9898fb7-fpr2h |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-995d678f4 to 2 |
| (x3) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| | openshift-console | kubelet | console-995d678f4-9mbdf | Created | Created container console |
| | openshift-console | multus | console-995d678f4-9mbdf | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-995d678f4-9mbdf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" already present on machine |
| | openshift-console | kubelet | console-995d678f4-9mbdf | Started | Started container console |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.16.0-0.nightly-2024-06-10-211334, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.16.0-0.nightly-2024-06-10-211334, 2 replicas available" |
| | openshift-console | kubelet | console-df9898fb7-27zvt | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-df9898fb7 | SuccessfulDelete | Deleted pod: console-df9898fb7-27zvt |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-df9898fb7 to 0 from 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 7; 1 node is at revision 9" to "NodeInstallerProgressing: 1 node is at revision 7; 2 nodes are at revision 9",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 node is at revision 9" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 7; 2 nodes are at revision 9" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-0" from revision 7 to 9 because static pod is ready |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Killing | Stopping container etcd-metrics |
| | openshift-etcd | static-pod-installer | installer-8-ci-op-9xx71rvq-1e28e-w667k-master-2 | StaticPodInstallerCompleted | Successfully installed revision 8 |
| (x2) | openshift-console | default-scheduler | console-995d678f4-wv2br | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 7 to 9 because node ci-op-9xx71rvq-1e28e-w667k-master-1 with revision 7 is the oldest | |
openshift-kube-apiserver |
kubelet |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine | |
openshift-kube-apiserver |
multus |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-1 |
AddedInterface |
Add eth0 [10.129.0.82/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-9-ci-op-9xx71rvq-1e28e-w667k-master-1 -n openshift-kube-apiserver because it was missing | |
| (x12) | openshift-etcd |
kubelet |
etcd-guard-ci-op-9xx71rvq-1e28e-w667k-master-2 |
ProbeError |
Readiness probe error: Get "https://10.0.0.7:9980/readyz": dial tcp 10.0.0.7:9980: connect: connection refused body: |
openshift-kube-apiserver |
kubelet |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-9-ci-op-9xx71rvq-1e28e-w667k-master-1 |
Started |
Started container installer | |
openshift-console |
default-scheduler |
console-995d678f4-wv2br |
Scheduled |
Successfully assigned openshift-console/console-995d678f4-wv2br to ci-op-9xx71rvq-1e28e-w667k-master-1 | |
openshift-console |
multus |
console-995d678f4-wv2br |
AddedInterface |
Add eth0 [10.129.0.83/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-995d678f4-wv2br |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49c23c640e34ad7886eefc489ccbe4e1d15ab63c3bbd9e1ed2acf73aef3ecb2c" already present on machine | |
openshift-console |
kubelet |
console-995d678f4-wv2br |
Created |
Created container console | |
openshift-console |
kubelet |
console-995d678f4-wv2br |
Started |
Started container console | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b0f7d2fbb9eebff4bb5c5ba2b23583f78902bc0fa9917566ebc86a6a2ee6b99" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c7cd88272ec1d0a6e1a9814448acb1744650cc1315124b44a8e7b6e711e96ed" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | static-pod-installer | installer-9-ci-op-9xx71rvq-1e28e-w667k-master-1 | StaticPodInstallerCompleted | Successfully installed revision 9 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.0.6:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | Unhealthy | Startup probe failed: Get "https://10.0.0.7:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-ci-op-9xx71rvq-1e28e-w667k-master-2 | ProbeError | Startup probe error: Get "https://10.0.0.7:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3608263143074270988 name:\"ci-op-9xx71rvq-1e28e-w667k-master-0\" peerURLs:\"https://10.0.0.8:2380\" clientURLs:\"https://10.0.0.8:2379\" Healthy:true Took:1.508937ms Error:<nil>} {Member:ID:9039689361178516505 name:\"ci-op-9xx71rvq-1e28e-w667k-master-1\" peerURLs:\"https://10.0.0.6:2380\" clientURLs:\"https://10.0.0.6:2379\" Healthy:true Took:1.87687ms Error:<nil>} {Member:ID:11862787134384716550 name:\"ci-op-9xx71rvq-1e28e-w667k-master-2\" peerURLs:\"https://10.0.0.7:2380\" clientURLs:\"https://10.0.0.7:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.0.7:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 8\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 7; 2 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-2" from revision 7 to 8 because static pod is ready |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x11) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x12) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-9xx71rvq-1e28e-w667k-master-1 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]poststarthook/apiservice-discovery-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:3608263143074270988 name:\"ci-op-9xx71rvq-1e28e-w667k-master-0\" peerURLs:\"https://10.0.0.8:2380\" clientURLs:\"https://10.0.0.8:2379\" Healthy:true Took:1.508937ms Error:<nil>} {Member:ID:9039689361178516505 name:\"ci-op-9xx71rvq-1e28e-w667k-master-1\" peerURLs:\"https://10.0.0.6:2380\" clientURLs:\"https://10.0.0.6:2379\" Healthy:true Took:1.87687ms Error:<nil>} {Member:ID:11862787134384716550 name:\"ci-op-9xx71rvq-1e28e-w667k-master-2\" peerURLs:\"https://10.0.0.7:2380\" clientURLs:\"https://10.0.0.7:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.0.7:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-28635075-4llz6 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28635075-4llz6 to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28635075 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28635075 | SuccessfulCreate | Created pod: collect-profiles-28635075-4llz6 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635075-4llz6 | Created | Created container collect-profiles |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-28635075-4llz6 | AddedInterface | Add eth0 [10.129.2.17/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635075-4llz6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635075-4llz6 | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28635075, status: Complete |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28635075 | Completed | Job completed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ba197ae2d89cf7ceab51c6f6a8b68df9505128a176b80642977899c52455c68" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bac0ddaf801035bf3d571daea9916b68407c1e9a58a3864616c1ca14e15e74bb" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-9xx71rvq-1e28e-w667k-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8\nEtcdMembersAvailable: 3 members are available" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-9xx71rvq-1e28e-w667k-master-2 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-9xx71rvq-1e28e-w667k-master-1" from revision 7 to 9 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 9"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 7; 2 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-nwhhb | AddedInterface | Add eth0 [10.130.0.89/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | redhat-marketplace-nwhhb | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-nwhhb to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-ba57d669e282667128b522794c4c602b successfully generated (release version: 4.16.0-0.nightly-2024-06-10-211334, controller version: 53f3e1eef97a3e1c2cae0b3cbcae3e10f9228d8d) |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 553ms (553ms including waiting) |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-c6b3ce5822afb83a87263de42dbc2483 successfully generated (release version: 4.16.0-0.nightly-2024-06-10-211334, controller version: 53f3e1eef97a3e1c2cae0b3cbcae3e10f9228d8d) |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 563ms (563ms including waiting) |
| | openshift-marketplace | kubelet | redhat-marketplace-nwhhb | Created | Created container registry-server |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | worker | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp to %!s(*string=0xc0016d2f08) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-master-2 to %!s(*string=0xc0016d2408) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-2 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ba57d669e282667128b522794c4c602b |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled down replica set image-registry-78579cd8f7 to 1 from 2 |
| | openshift-image-registry | kubelet | image-registry-ccb445df9-kkwgm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" already present on machine |
| (x5) | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DeploymentUpdated | Updated Deployment.apps/image-registry -n openshift-image-registry because it changed |
| | openshift-image-registry | replicaset-controller | image-registry-ccb445df9 | SuccessfulCreate | Created pod: image-registry-ccb445df9-kkwgm |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-ccb445df9 to 1 |
| | openshift-image-registry | replicaset-controller | image-registry-ccb445df9 | SuccessfulCreate | Created pod: image-registry-ccb445df9-r7bcf |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/image-registry -n openshift-image-registry: Operation cannot be fulfilled on deployments.apps "image-registry": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-zxrg2 | Killing | Stopping container registry |
| | openshift-image-registry | replicaset-controller | image-registry-78579cd8f7 | SuccessfulDelete | Deleted pod: image-registry-78579cd8f7-zxrg2 |
| | openshift-image-registry | default-scheduler | image-registry-ccb445df9-kkwgm | Scheduled | Successfully assigned openshift-image-registry/image-registry-ccb445df9-kkwgm to ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
| | openshift-image-registry | multus | image-registry-ccb445df9-kkwgm | AddedInterface | Add eth0 [10.131.0.18/23] from ovn-kubernetes |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-ccb445df9 to 2 from 1 |
| | openshift-image-registry | kubelet | image-registry-ccb445df9-kkwgm | Created | Created container registry |
| | openshift-image-registry | kubelet | image-registry-ccb445df9-kkwgm | Started | Started container registry |
openshift-marketplace |
kubelet |
redhat-marketplace-nwhhb |
Killing |
Stopping container registry-server | |
openshift-marketplace |
default-scheduler |
redhat-operators-49f7g |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-49f7g to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-marketplace |
default-scheduler |
certified-operators-7mlr4 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-7mlr4 to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-marketplace |
kubelet |
redhat-operators-49f7g |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine | |
openshift-marketplace |
default-scheduler |
community-operators-h2smh |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-h2smh to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-marketplace |
multus |
redhat-operators-49f7g |
AddedInterface |
Add eth0 [10.130.0.90/23] from ovn-kubernetes | |
| (x3) | openshift-image-registry |
default-scheduler |
image-registry-ccb445df9-r7bcf |
FailedScheduling |
0/6 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match pod topology spread constraints, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match pod topology spread constraints, 3 Preemption is not helpful for scheduling. |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp |
SkipReboot |
Config changes do not require reboot. | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-9xx71rvq-1e28e-w667k-master-2 |
SkipReboot |
Config changes do not require reboot. | |
openshift-marketplace |
multus |
community-operators-h2smh |
AddedInterface |
Add eth0 [10.130.0.92/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-49f7g |
Created |
Created container extract-utilities | |
openshift-marketplace |
multus |
certified-operators-7mlr4 |
AddedInterface |
Add eth0 [10.130.0.91/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-49f7g |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-7mlr4 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-7mlr4 |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" | |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.16" |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-h2smh | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-h2smh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | community-operators-h2smh | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-h2smh | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 800ms (800ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-h2smh | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 1.036s (1.036s including waiting) |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.16" in 826ms (826ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-h2smh | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-h2smh | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 586ms (586ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-h2smh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 582ms (582ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-h2smh | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-h2smh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 541ms (541ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-h2smh | Started | Started container registry-server |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp, currentConfig rendered-worker-c6b3ce5822afb83a87263de42dbc2483 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-c6b3ce5822afb83a87263de42dbc2483 |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Uncordon | Update completed for config rendered-worker-c6b3ce5822afb83a87263de42dbc2483 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | Uncordon | Update completed for config rendered-master-ba57d669e282667128b522794c4c602b and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-2, currentConfig rendered-master-ba57d669e282667128b522794c4c602b to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-ba57d669e282667128b522794c4c602b |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled down replica set image-registry-78579cd8f7 to 0 from 1 |
| | openshift-image-registry | kubelet | image-registry-78579cd8f7-ssfzl | Killing | Stopping container registry |
| | openshift-image-registry | replicaset-controller | image-registry-78579cd8f7 | SuccessfulDelete | Deleted pod: image-registry-78579cd8f7-ssfzl |
| | openshift-marketplace | kubelet | community-operators-h2smh | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | worker | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 to %!s(*string=0xc0016ada08) |
| | openshift-marketplace | kubelet | certified-operators-7mlr4 | Killing | Stopping container registry-server |
| | openshift-image-registry | default-scheduler | image-registry-ccb445df9-r7bcf | FailedScheduling | 0/6 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling. |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ba57d669e282667128b522794c4c602b |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-master-0 to %!s(*string=0xc00165c988) |
| | openshift-marketplace | kubelet | redhat-operators-49f7g | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | SkipReboot | Config changes do not require reboot. |
| | openshift-image-registry | kubelet | image-registry-ccb445df9-r7bcf | Created | Created container registry |
| | openshift-image-registry | multus | image-registry-ccb445df9-r7bcf | AddedInterface | Add eth0 [10.129.2.18/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | image-registry-ccb445df9-r7bcf | Started | Started container registry |
| | openshift-image-registry | default-scheduler | image-registry-ccb445df9-r7bcf | Scheduled | Successfully assigned openshift-image-registry/image-registry-ccb445df9-r7bcf to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-image-registry | kubelet | image-registry-ccb445df9-r7bcf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:739be161a33def82332ba37b9a997041006b673f8379218be7b0ac2d58512d30" already present on machine |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | SkipReboot | Config changes do not require reboot. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-c6b3ce5822afb83a87263de42dbc2483 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9, currentConfig rendered-worker-c6b3ce5822afb83a87263de42dbc2483 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | Uncordon | Update completed for config rendered-worker-c6b3ce5822afb83a87263de42dbc2483 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-0, currentConfig rendered-master-ba57d669e282667128b522794c4c602b to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | Uncordon | Update completed for config rendered-master-ba57d669e282667128b522794c4c602b and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | worker | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 to %!s(*string=0xc000cba148) |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-ba57d669e282667128b522794c4c602b |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-master-1 to %!s(*string=0xc0016d2408) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-1 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ba57d669e282667128b522794c4c602b |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | SkipReboot | Config changes do not require reboot. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | SkipReboot | Config changes do not require reboot. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | Uncordon | Update completed for config rendered-worker-c6b3ce5822afb83a87263de42dbc2483 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49, currentConfig rendered-worker-c6b3ce5822afb83a87263de42dbc2483 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-c6b3ce5822afb83a87263de42dbc2483 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | Uncordon | Update completed for config rendered-master-ba57d669e282667128b522794c4c602b and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-1, currentConfig rendered-master-ba57d669e282667128b522794c4c602b to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-ba57d669e282667128b522794c4c602b |
| | openshift-marketplace | default-scheduler | qe-app-registry-jqgvg | Scheduled | Successfully assigned openshift-marketplace/qe-app-registry-jqgvg to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-marketplace | default-scheduler | qe-app-registry-cfn6j | Scheduled | Successfully assigned openshift-marketplace/qe-app-registry-cfn6j to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Pulling | Pulling image "quay.io/openshift-qe-optional-operators/aosqe-index:v1.29" |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | multus | qe-app-registry-cfn6j | AddedInterface | Add eth0 [10.129.2.20/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-67bdd916feb50f3e2e66a658aef8c73a successfully generated (release version: 4.16.0-0.nightly-2024-06-10-211334, controller version: 53f3e1eef97a3e1c2cae0b3cbcae3e10f9228d8d) |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 successfully generated (release version: 4.16.0-0.nightly-2024-06-10-211334, controller version: 53f3e1eef97a3e1c2cae0b3cbcae3e10f9228d8d) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-master-2 to %!s(*string=0xc000cbb748) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-2 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 |
| (x2) | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-2 now has machineconfiguration.openshift.io/state=Working |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | worker | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp to %!s(*string=0xc001c50148) |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | SkipReboot | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Pulled | Successfully pulled image "quay.io/openshift-qe-optional-operators/aosqe-index:v1.29" in 15.334s (15.334s including waiting) |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Started | Started container extract-content |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Created | Created container extract-content |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | SkipReboot | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | Uncordon | Update completed for config rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-2, currentConfig rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-2 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-67bdd916feb50f3e2e66a658aef8c73a |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Started | Started container registry-server |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Created | Created container registry-server |
| | openshift-marketplace | kubelet | qe-app-registry-cfn6j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 10.749s (10.749s including waiting) |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | Uncordon | Update completed for config rendered-worker-67bdd916feb50f3e2e66a658aef8c73a and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus1-k2hfp, currentConfig rendered-worker-67bdd916feb50f3e2e66a658aef8c73a to Done |
| (x4) | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | DeferringOperatorNodeUpdate | Deferring update of machine config operator node ci-op-9xx71rvq-1e28e-w667k-master-1 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-master-0 to %!s(*string=0xc0016bd1c8) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | worker | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 to %!s(*string=0xc001988988) |
| (x2) | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-0 now has machineconfiguration.openshift.io/state=Working |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | SkipReboot | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | SkipReboot | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9, currentConfig rendered-worker-67bdd916feb50f3e2e66a658aef8c73a to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | Uncordon | Update completed for config rendered-worker-67bdd916feb50f3e2e66a658aef8c73a and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus2-xnvk9 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-67bdd916feb50f3e2e66a658aef8c73a |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-0, currentConfig rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-0 | Uncordon | Update completed for config rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | worker | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 to %!s(*string=0xc000bd91c8) |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-1 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | SetDesiredConfig | Targeted node ci-op-9xx71rvq-1e28e-w667k-master-1 to %!s(*string=0xc001988f08) |
| (x2) | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | ConfigDriftMonitorStopped | Config Drift Monitor stopped |
| (x2) | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-9xx71rvq-1e28e-w667k-master-1 now has machineconfiguration.openshift.io/state=Working |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | SkipReboot | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | SkipReboot | Config changes do not require reboot. Service crio was reloaded. |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49, currentConfig rendered-worker-67bdd916feb50f3e2e66a658aef8c73a to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | Uncordon | Update completed for config rendered-worker-67bdd916feb50f3e2e66a658aef8c73a and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-67bdd916feb50f3e2e66a658aef8c73a |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | Uncordon | Update completed for config rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | NodeDone | Setting node ci-op-9xx71rvq-1e28e-w667k-master-1, currentConfig rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-9xx71rvq-1e28e-w667k-master-1 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-dbe5dc87fcbfbde4aed5058a9e5dd041 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | CreatedSCCRanges | created SCC ranges for test-ssh-bastion namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 | CreatedSCCRanges | created SCC ranges for openshift-windows-machine-config-operator namespace |
| | openshift-marketplace | job-controller | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcf1a1e | SuccessfulCreate | Created pod: d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf |
| | openshift-marketplace | default-scheduler | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Scheduled | Successfully assigned openshift-marketplace/d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-marketplace | multus | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | AddedInterface | Add eth0 [10.129.2.22/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Pulling | Pulling image "brew.registry.redhat.io/rh-osbs/openshift4-wincw-windows-machine-config-operator-bundle@sha256:6fca255dc3c031f23cc088b6ec59b9c1a273ddd507974c9c196b11beb5e572b5" |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Started | Started container util |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Created | Created container util |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Created | Created container pull |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Pulled | Successfully pulled image "brew.registry.redhat.io/rh-osbs/openshift4-wincw-windows-machine-config-operator-bundle@sha256:6fca255dc3c031f23cc088b6ec59b9c1a273ddd507974c9c196b11beb5e572b5" in 2.283s (2.283s including waiting) |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Started | Started container pull |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" already present on machine |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Created | Created container extract |
| | openshift-marketplace | kubelet | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcv66wf | Started | Started container extract |
| | openshift-marketplace | job-controller | d02b369beb22051919a2926386588dfd6536bb0861a898fc05e6d8defcf1a1e | Completed | Job completed |
| | default | metrics | openshift-windows-machine-config-operator | labelValidationFailed | Cluster monitoring openshift.io/cluster-monitoring=true label is not enabled in openshift-windows-machine-config-operator namespace |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635090-jgzfc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28635090 | SuccessfulCreate | Created pod: collect-profiles-28635090-jgzfc |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-28635090-jgzfc | AddedInterface | Add eth0 [10.129.2.23/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-28635090-jgzfc | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28635090-jgzfc to ci-op-9xx71rvq-1e28e-w667k-worker-centralus3-hgn49 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28635090 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635090-jgzfc | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28635090-jgzfc | Created | Created container collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28635090 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28635090, status: Complete |
| (x2) | openshift-machine-api | azure-controller | windows-znf27 | FailedCreate | InvalidConfiguration: failed to reconcile machine "windows-znf27": failed to create vm windows-znf27: failure sending request for machine windows-znf27: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client has permission to perform action 'Microsoft.Resources/subscriptions/read' on scope '/subscriptions/53b8f551-f0fc-4bea-8cba-6d1fefd54c8a/resourceGroups/ci-op-9xx71rvq-1e28e-w667k-rg/providers/Microsoft.Compute/virtualMachines/windows-znf27', however the linked subscription '53b8f551-f0fc-4bea-8cba-6d1fefd54c8a2022-datacenter' was not found. " |
| (x2) | openshift-machine-api | azure-controller | windows-rk7hv | FailedCreate | InvalidConfiguration: failed to reconcile machine "windows-rk7hv": failed to create vm windows-rk7hv: failure sending request for machine windows-rk7hv: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=403 -- Original Error: Code="LinkedAuthorizationFailed" Message="The client has permission to perform action 'Microsoft.Resources/subscriptions/read' on scope '/subscriptions/53b8f551-f0fc-4bea-8cba-6d1fefd54c8a/resourceGroups/ci-op-9xx71rvq-1e28e-w667k-rg/providers/Microsoft.Compute/virtualMachines/windows-rk7hv', however the linked subscription '53b8f551-f0fc-4bea-8cba-6d1fefd54c8a2022-datacenter' was not found. " |
| | openshift-marketplace | default-scheduler | certified-operators-8d9hj | Scheduled | Successfully assigned openshift-marketplace/certified-operators-8d9hj to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Started | Started container extract-utilities |
| | openshift-marketplace | multus | certified-operators-8d9hj | AddedInterface | Add eth0 [10.130.0.93/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.16" |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.16" in 541ms (541ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 549ms (549ms including waiting) |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-8d9hj | Killing | Stopping container registry-server |
| | openshift-marketplace | default-scheduler | community-operators-86xvj | Scheduled | Successfully assigned openshift-marketplace/community-operators-86xvj to ci-op-9xx71rvq-1e28e-w667k-master-2 |
| | openshift-marketplace | kubelet | community-operators-86xvj | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-86xvj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine |
| | openshift-marketplace | kubelet | community-operators-86xvj | Started | Started container extract-utilities |
| | openshift-marketplace | multus | community-operators-86xvj | AddedInterface | Add eth0 [10.130.0.94/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-86xvj | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.16" |
| | openshift-marketplace | kubelet | community-operators-86xvj | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-86xvj | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.16" in 634ms (634ms including waiting) |
| | openshift-marketplace | kubelet | community-operators-86xvj | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-86xvj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | community-operators-86xvj | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-86xvj | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-86xvj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 640ms (640ms including waiting) |
openshift-marketplace |
kubelet |
community-operators-86xvj |
Killing |
Stopping container registry-server | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-ci-op-9xx71rvq-1e28e-w667k-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-must-gather-nrc2g namespace | |
openshift-marketplace |
default-scheduler |
redhat-marketplace-mvp6g |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-mvp6g to ci-op-9xx71rvq-1e28e-w667k-master-2 | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Started |
Started container extract-utilities | |
openshift-marketplace |
multus |
redhat-marketplace-mvp6g |
AddedInterface |
Add eth0 [10.130.0.95/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c98f79ab486ea5a1d832c1393ca7da8a3131096a54ea4a1779a8a57f7025fdb" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.16" in 862ms (862ms including waiting) | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2ac5039a8d9bfb21593e4ee42c94bd445efa8d7be13fd63cd00049ce5db1de" in 568ms (568ms including waiting) | |
openshift-marketplace |
kubelet |
redhat-marketplace-mvp6g |
Killing |
Stopping container registry-server |